Compare commits

...

49 Commits

Author SHA1 Message Date
CalDescent
656896d16f Fixed issue causing base block prune/trim heights to not be updated on the final pass. 2021-09-24 21:36:05 +01:00
CalDescent
19bf8afece Fixed bug in pruning phase on node startup
This was causing very recent AT states to be deleted accidentally, because we weren't rebuilding the LatestATStates table before running the query. We should add unit tests to cover this process in case there are any other undiscovered problems.
2021-09-24 20:52:45 +01:00
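A minimal sketch of the ordering fix described above, using repository methods that appear in the diffs below (rebuildLatestAtStates(), pruneAtStates()); the height variables are illustrative:
// Rebuild the LatestATStates lookup *before* running the prune query,
// so very recent AT states are excluded from deletion.
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates();
int numAtStatesPruned = repository.getATRepository().pruneAtStates(pruneStartHeight, upperPruneHeight);
repository.saveChanges();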
CalDescent
841b6c4ddf Fixed another issue causing ATStatesHeightIndex to go missing after pruning. 2021-09-24 15:58:09 +01:00
CalDescent
4c171df848 Disable archiving and pruning if the ATStatesHeightIndex is missing, and log it so that the user knows they should bootstrap or resync. 2021-09-24 11:14:18 +01:00
CalDescent
1f79d88840 Fixed errors found in unit tests. 2021-09-24 09:35:36 +01:00
CalDescent
6ee7e9d731 Merge branch 'block-archive' of github.com:Qortal/qortal into block-archive
# Conflicts:
#	src/main/java/org/qortal/controller/Controller.java
2021-09-24 08:50:35 +01:00
CalDescent
4856223838 Fixed error in rebase. 2021-09-24 08:50:00 +01:00
CalDescent
74ea2a847d Added unit tests for trimming, pruning, and archiving. 2021-09-24 08:37:36 +01:00
CalDescent
9813dde3d9 Added importFromArchive() feature
This allows archived blocks to be imported back into HSQLDB in order to make them SQL-compatible again.
2021-09-24 08:37:36 +01:00
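A rough sketch of what this import might look like, based on the BlockArchiveReader API shown in the diffs below; the save calls are assumptions rather than the actual implementation:
// Hypothetical: read one archived block and re-insert it into HSQLDB
// so that it becomes SQL-queryable again.
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo =
BlockArchiveReader.getInstance().fetchBlockAtHeight(height);
if (blockInfo != null) {
repository.getBlockRepository().save(blockInfo.getA()); // assumed repository API
repository.saveChanges();
}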
CalDescent
fea7b62b9c Fixed some bugs found in unit testing. 2021-09-24 08:37:36 +01:00
CalDescent
37e03bf2bb Removed BLOCK_LIMIT_REACHED result from the block archive writer.
This wasn't needed, and is now instead caught by the NOT_ENOUGH_BLOCKS result.
2021-09-24 08:37:36 +01:00
CalDescent
5656de79a2 Removed maxDuplicatedBlocksWhenArchiving setting as it's no longer needed. 2021-09-24 08:37:36 +01:00
CalDescent
70c6048cc1 Added block archive mode
This takes all trimmed blocks (which should now be all but the last 1450 or so) and moves them into flat files. Each file contains the serialized bytes of as many blocks as can fit within the file-size target of 100MiB.

As a result, the HSQLDB size drops to less than 1GB, making it much faster and easier to maintain. It also significantly reduces the total size of each full node, because the data is stored in a highly optimized way.

HSQLDB then works similarly to the way it does in pruning mode - it holds all transactions, the latest state of every AT, as well as the full AT states data and hashes for the past 1450 blocks.

Each archive file contains headers and indexes in order to quickly locate blocks. When a peer requests a block that is within the archive, the serialized bytes are sent directly, without the need to go via a BlockData object. Now that no slow queries or data serialization processes are needed, this should greatly speed up block serving.

The /block API endpoints have been modified in such a way that they will also check and retrieve blocks from the archive when needed.

A lightweight "BlockArchive" table is needed in HSQLDB to map block heights to signatures minters and timestamps. It made more sense to keep SQL support for these basic attributes of each block. These are located in a separate table from the full blocks, in order to create a clear distinction between HSQLDB blocks and archived blocks, and also to speed up query times in the Blocks table, which is the one we are using 99% of the time.

There is currently a restriction on the /admin/orphan API endpoint to prevent orphaning beyond the threshold of the block archive.
2021-09-24 08:37:36 +01:00
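A condensed sketch of the archive-writing idea described above; the loop and length prefix are inferred from this description, the filename follows the "start-end" parsing seen in BlockArchiveReader below, and serializeBlockAtHeight() is a hypothetical helper, not the real BlockArchiveWriter internals:
// Sketch: pack serialized blocks into a buffer until the ~100MiB
// target is reached, then flush to a flat file in the archive folder.
static void writeArchiveFile(int startHeight) throws IOException {
final int FILE_SIZE_TARGET = 100 * 1024 * 1024; // 100MiB
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int height = startHeight;
while (buffer.size() < FILE_SIZE_TARGET) {
byte[] blockBytes = serializeBlockAtHeight(height); // hypothetical helper
if (blockBytes == null)
break; // ran out of trimmed blocks to archive
buffer.write(Ints.toByteArray(blockBytes.length)); // length prefix aids indexing
buffer.write(blockBytes);
height++;
}
Files.write(Paths.get("archive", startHeight + "-" + (height - 1) + ".dat"), buffer.toByteArray());
}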
CalDescent
87595fd704 Synchronized LatestATStates, to make rebuildLatestAtStates() thread safe. 2021-09-24 08:37:15 +01:00
CalDescent
dc030a42bb Moved trimming and pruning classes into a single package (org.qortal.controller.repository) 2021-09-24 08:37:15 +01:00
CalDescent
89283ed179 Increased atStatesPruneBatchSize from 10 to 25. 2021-09-24 08:36:52 +01:00
CalDescent
64e8a05a9f Prune ATStatesData as well as the ATStates when switching to pruning mode. 2021-09-24 08:36:52 +01:00
CalDescent
676320586a Updated tests to use the renamed method. 2021-09-24 08:36:52 +01:00
CalDescent
734fa51806 Unified the code to build the LatestATStates table, as it's now used by more than one class.
Note: rebuildLatestAtStates() must never be called by two different classes at the same time, or AT states could be incorrectly deleted. This is okay at the moment, as we don't run the AT states trimmer and pruner in the same app session. However, we should probably synchronize this method so that we don't accidentally call it from two places in the future.
2021-09-24 08:36:52 +01:00
CalDescent
f056ecc8d8 Added bulk pruning phase on node startup the first time that pruning mode is enabled.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.

The trade-off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.

Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.

From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
2021-09-24 08:36:52 +01:00
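The copy-then-swap approach might look roughly like the following HSQLDB SQL; the table and column names are illustrative, not the real schema:
// Sketch: keep only the recent AT states by copying them to a new
// table, dropping the original, then defragmenting the database file.
try (Statement stmt = connection.createStatement()) {
stmt.execute("CREATE TABLE ATStatesNew AS (SELECT * FROM ATStates WHERE height >= 100550) WITH DATA");
stmt.execute("DROP TABLE ATStates");
stmt.execute("ALTER TABLE ATStatesNew RENAME TO ATStates");
stmt.execute("CHECKPOINT DEFRAG"); // shrink the .data file once pruning is complete
}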
CalDescent
1a722c1517 Break out of the AT pruning inner loops if we're stopping the app. 2021-09-24 08:36:51 +01:00
CalDescent
44607ba6a4 Fixed NPE introduced in earlier commit. 2021-09-24 08:36:51 +01:00
CalDescent
01d66212da Updated AT states pruner as it previously relied on blocks being present in the db to make decisions. As a side effect, this now prunes ATs up to the pruneBlockLimit too, rather than keeping the last 35 days or so. Will review this later, but I don't think we will need the missing ones. 2021-09-24 08:36:51 +01:00
CalDescent
925e10b19b Rework of Blockchain.validate() to account for pruning mode. 2021-09-24 08:36:51 +01:00
CalDescent
1b4c75a76e Prune all blocks up until the blockPruneLimit
By default, this leaves only the last 1450 blocks in the database. Only applies when pruning mode is enabled.
2021-09-24 08:36:51 +01:00
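In height terms, and assuming the default pruneBlockLimit of 1450 mentioned above, the prunable range works out as follows (example numbers only):
// Example: chain tip at height 102000, pruneBlockLimit = 1450
int upperPrunableHeight = 102000 - 1450; // = 100550
// Blocks 1..100550 are prunable; blocks 100551..102000 are retained.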
CalDescent
3400e36ac4 Started work on pruning mode (top-only-sync)
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.

Nodes in pruning mode will be unable to serve older blocks to peers.
2021-09-24 08:36:51 +01:00
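The "delete individually" approach might look like the height-by-height loop below; deleteAtStatesAtHeight() is a hypothetical stand-in, not the actual repository method:
// Sketch: prune one block's AT states at a time, because a single
// large DELETE is too much for the oversized table.
for (int height = startHeight; height < upperPrunableHeight; height++) {
if (Controller.isStopping())
break; // bail out promptly on shutdown
deleteAtStatesAtHeight(repository, height); // hypothetical helper
repository.saveChanges();
}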
CalDescent
ba06225b01 Merge branch 'master' into block-archive 2021-09-12 10:17:11 +01:00
CalDescent
14f6fd19ef Added unit tests for trimming, pruning, and archiving. 2021-09-12 10:13:52 +01:00
CalDescent
1d8351f921 Added importFromArchive() feature
This allows archived blocks to be imported back into HSQLDB in order to make them SQL-compatible again.
2021-09-12 10:10:25 +01:00
CalDescent
6a55b052f5 Fixed some bugs found in unit testing. 2021-09-12 09:57:12 +01:00
CalDescent
2a36b83dea Removed BLOCK_LIMIT_REACHED result from the block archive writer.
This wasn't needed, and is now instead caught by the NOT_ENOUGH_BLOCKS result.
2021-09-12 09:55:49 +01:00
CalDescent
14acc4feb9 Removed maxDuplicatedBlocksWhenArchiving setting as it's no longer needed. 2021-09-12 09:52:28 +01:00
CalDescent
0657ca2969 atStatesMaxLifetime increased to 5 days
For now, we need some headroom to allow for orphaning in the event of a problem. Orphaning currently fails if there is no ATStatesData available (which is the case for trimmed blocks). This could ultimately be solved by retaining older unique states.
2021-09-09 17:46:19 +01:00
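Expressed in milliseconds (the units used for these settings elsewhere in the code), the new lifetime gives a trim cutoff like this; the variable names are assumptions:
long atStatesMaxLifetime = 5 * 24 * 60 * 60 * 1000L; // 5 days = 432000000 ms
long trimThresholdTimestamp = NTP.getTime() - atStatesMaxLifetime;
// AT states in blocks older than trimThresholdTimestamp may be trimmed.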
CalDescent
703cdfe174 Added block archive mode
This takes all trimmed blocks (which should now be all but the last 1450 or so) and moves them into flat files. Each file contains the serialized bytes of as many blocks as can fit within the file-size target of 100MiB.

As a result, the HSQLDB size drops to less than 1GB, making it much faster and easier to maintain. It also significantly reduces the total size of each full node, because the data is stored in a highly optimized way.

HSQLDB then works similarly to the way it does in pruning mode - it holds all transactions, the latest state of every AT, as well as the full AT states data and hashes for the past 1450 blocks.

Each archive file contains headers and indexes in order to quickly locate blocks. When a peer requests a block that is within the archive, the serialized bytes are sent directly, without the need to go via a BlockData object. Now that no slow queries or data serialization processes are needed, this should greatly speed up block serving.

The /block API endpoints have been modified in such a way that they will also check and retrieve blocks from the archive when needed.

A lightweight "BlockArchive" table is needed in HSQLDB to map block heights to signatures minters and timestamps. It made more sense to keep SQL support for these basic attributes of each block. These are located in a separate table from the full blocks, in order to create a clear distinction between HSQLDB blocks and archived blocks, and also to speed up query times in the Blocks table, which is the one we are using 99% of the time.

There is currently a restriction on the /admin/orphan API endpoint to prevent orphaning beyond the threshold of the block archive.
2021-09-04 19:40:51 +01:00
CalDescent
02988989ad Reduced online account signatures min and max lifetimes
onlineAccountSignaturesMinLifetime reduced from 720 hours to 12 hours
onlineAccountSignaturesMaxLifetime reduced from 888 hours to 24 hours

These were using up too much space in the database and so it makes sense to trim them more aggressively (assuming testing goes well). We will now stop validating online account signatures after 12 hours, which should be more than enough confirmations, and we will discard them after 24 hours.

Note: this will create some complexity once some of the network is running this code. It could cause out-of-sync nodes on old versions to start treating blocks from updated peers as invalid. It's likely not worth the complexity of a hard fork though, given that almost all nodes will be synced to the chain tip and will therefore be unaffected. And even with a hard fork, we'd still face this problem on out-of-date nodes.
2021-09-03 10:11:02 +01:00
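For reference, the reduced lifetimes in milliseconds, with the validate/discard split described above (variable names assumed to mirror the settings):
long onlineAccountSignaturesMinLifetime = 12 * 60 * 60 * 1000L; // was 720 hours
long onlineAccountSignaturesMaxLifetime = 24 * 60 * 60 * 1000L; // was 888 hours
// Older than min: signatures are no longer validated.
// Older than max: signatures are trimmed from the database.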
CalDescent
25c17d3704 atStatesMaxLifetime reduced from 14 days to 24 hours 2021-09-03 10:04:04 +01:00
CalDescent
9973fe4326 Synchronized LatestATStates, to make rebuildLatestAtStates() thread safe. 2021-08-28 11:00:49 +01:00
CalDescent
2479f2d65d Moved trimming and pruning classes into a single package (org.qortal.controller.repository) 2021-08-27 09:45:56 +01:00
CalDescent
9056cb7026 Increased atStatesPruneBatchSize from 10 to 25. 2021-08-27 09:45:56 +01:00
CalDescent
cd9d9b31ef Prune ATStatesData as well as the ATStates when switching to pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
ff841c28e3 Updated tests to use the renamed method. 2021-08-27 09:45:56 +01:00
CalDescent
ca1379d9f8 Unified the code to build the LatestATStates table, as it's now used by more than one class.
Note: rebuildLatestAtStates() must never be called by two different classes at the same time, or AT states could be incorrectly deleted. This is okay at the moment, as we don't run the AT states trimmer and pruner in the same app session. However, we should probably synchronize this method so that we don't accidentally call it from two places in the future.
2021-08-27 09:45:56 +01:00
CalDescent
5127f94423 Added bulk pruning phase on node startup the first time that pruning mode is enabled.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.

The trade-off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.

Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.

From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
2021-08-27 09:45:56 +01:00
CalDescent
f5910ab950 Break out of the AT pruning inner loops if we're stopping the app. 2021-08-27 09:45:56 +01:00
CalDescent
22efaccd4a Fixed NPE introduced in earlier commit. 2021-08-27 09:45:56 +01:00
CalDescent
c8466a2e7a Updated AT states pruner as it previously relied on blocks being present in the db to make decisions. As a side effect, this now prunes ATs up to the pruneBlockLimit too, rather than keeping the last 35 days or so. Will review this later, but I don't think we will need the missing ones. 2021-08-27 09:45:56 +01:00
CalDescent
209a9fa8c3 Rework of Blockchain.validate() to account for pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
bc1af12655 Prune all blocks up until the blockPruneLimit
By default, this leaves only the last 1450 blocks in the database. Only applies when pruning mode is enabled.
2021-08-27 09:45:55 +01:00
CalDescent
e7e4cb7579 Started work on pruning mode (top-only-sync)
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.

Nodes in pruning mode will be unable to serve older blocks to peers.
2021-08-27 09:45:55 +01:00
34 changed files with 3423 additions and 310 deletions

View File

@@ -35,6 +35,7 @@ import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.RollingFileAppender;
import org.qortal.account.Account;
@@ -67,6 +68,8 @@ import com.google.common.collect.Lists;
@Tag(name = "Admin")
public class AdminResource {
private static final Logger LOGGER = LogManager.getLogger(AdminResource.class);
private static final int MAX_LOG_LINES = 500;
@Context
@@ -459,6 +462,23 @@ public class AdminResource {
if (targetHeight <= 0 || targetHeight > Controller.getInstance().getChainHeight())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_HEIGHT);
// Make sure we're not orphaning as far back as the archived blocks
// FUTURE: we could support this by first importing earlier blocks from the archive
if (Settings.getInstance().isPruningEnabled() ||
Settings.getInstance().isArchiveEnabled()) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Find the first unarchived block
int oldestBlock = repository.getBlockArchiveRepository().getBlockArchiveHeight();
// Add some extra blocks just in case we're currently archiving/pruning
oldestBlock += 100;
if (targetHeight <= oldestBlock) {
LOGGER.info("Unable to orphan beyond block {} because it is archived", oldestBlock);
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_HEIGHT);
}
}
}
if (BlockChain.orphan(targetHeight))
return "true";
else

View File

@@ -15,6 +15,8 @@ import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
@@ -33,11 +35,13 @@ import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockMintingInfo;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.controller.Controller;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -81,11 +85,19 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Check the database first
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData != null) {
return blockData;
}
return blockData;
// Not found, so try the block archive
blockData = repository.getBlockArchiveRepository().fromSignature(signature);
if (blockData != null) {
return blockData;
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -116,16 +128,24 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Check the database first
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData != null) {
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
return Base58.encode(bytes.toByteArray());
}
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
return Base58.encode(bytes.toByteArray());
// Not found, so try the block archive
byte[] bytes = BlockArchiveReader.getInstance().fetchSerializedBlockBytesForSignature(signature, repository);
if (bytes != null) {
return Base58.encode(bytes);
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
} catch (TransformationException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA, e);
} catch (DataException | IOException e) {
@@ -170,8 +190,12 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
if (repository.getBlockRepository().getHeightFromSignature(signature) == 0)
// Check if the block exists in either the database or archive
if (repository.getBlockRepository().getHeightFromSignature(signature) == 0 &&
repository.getBlockArchiveRepository().getHeightFromSignature(signature) == 0) {
// Not found in either the database or archive
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
return repository.getBlockRepository().getTransactionsFromSignature(signature, limit, offset, reverse);
} catch (DataException e) {
@@ -200,7 +224,19 @@ public class BlocksResource {
})
public BlockData getFirstBlock() {
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getBlockRepository().fromHeight(1);
// Check the database first
BlockData blockData = repository.getBlockRepository().fromHeight(1);
if (blockData != null) {
return blockData;
}
// Try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(1);
if (blockData != null) {
return blockData;
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -262,17 +298,28 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData childBlockData = null;
// Check if block exists in database
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
return repository.getBlockRepository().fromReference(signature);
}
// Check block exists
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
BlockData childBlockData = repository.getBlockRepository().fromReference(signature);
// Not found, so try the archive
// This also checks that the parent block exists
// It will return null if either the parent or child don't exist
childBlockData = repository.getBlockArchiveRepository().fromReference(signature);
// Check child block exists
if (childBlockData == null)
if (childBlockData == null) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
// Check child block's reference matches the supplied signature
if (!Arrays.equals(childBlockData.getReference(), signature)) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
return childBlockData;
} catch (DataException e) {
@@ -338,13 +385,20 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Firstly check the database
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
return blockData.getHeight();
}
// Check block exists
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
// Not found, so try the archive
blockData = repository.getBlockArchiveRepository().fromSignature(signature);
if (blockData != null) {
return blockData.getHeight();
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
return blockData.getHeight();
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -371,11 +425,20 @@ public class BlocksResource {
})
public BlockData getByHeight(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Firstly check the database
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData != null) {
return blockData;
}
// Not found, so try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
if (blockData != null) {
return blockData;
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
return blockData;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -402,12 +465,31 @@ public class BlocksResource {
})
public BlockMintingInfo getBlockMintingInfoByHeight(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Try the database
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData == null) {
// Not found, so try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
if (blockData == null) {
// Still not found
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
}
Block block = new Block(repository, blockData);
BlockData parentBlockData = repository.getBlockRepository().fromSignature(blockData.getReference());
if (parentBlockData == null) {
// Parent block not found - try the archive
parentBlockData = repository.getBlockArchiveRepository().fromSignature(blockData.getReference());
if (parentBlockData == null) {
// Still not found
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
}
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
if (minterLevel == 0)
// This may be unavailable when requesting a trimmed block
@@ -454,13 +536,26 @@ public class BlocksResource {
})
public BlockData getByTimestamp(@PathParam("timestamp") long timestamp) {
try (final Repository repository = RepositoryManager.getRepository()) {
int height = repository.getBlockRepository().getHeightFromTimestamp(timestamp);
if (height == 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
BlockData blockData = null;
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
// Try the Blocks table
int height = repository.getBlockRepository().getHeightFromTimestamp(timestamp);
if (height > 0) {
// Found match in Blocks table
return repository.getBlockRepository().fromHeight(height);
}
// Not found in Blocks table, so try the archive
height = repository.getBlockArchiveRepository().getHeightFromTimestamp(timestamp);
if (height > 0) {
// Found match in archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
}
// Ensure block exists
if (blockData == null) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
return blockData;
} catch (DataException e) {
@@ -497,9 +592,14 @@ public class BlocksResource {
for (/* count already set */; count > 0; --count, ++height) {
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
// Run out of blocks!
break;
if (blockData == null) {
// Not found - try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
if (blockData == null) {
// Run out of blocks!
break;
}
}
blocks.add(blockData);
}
@@ -544,7 +644,29 @@ public class BlocksResource {
if (accountData == null || accountData.getPublicKey() == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.PUBLIC_KEY_NOT_FOUND);
return repository.getBlockRepository().getBlockSummariesBySigner(accountData.getPublicKey(), limit, offset, reverse);
List<BlockSummaryData> summaries = repository.getBlockRepository()
.getBlockSummariesBySigner(accountData.getPublicKey(), limit, offset, reverse);
// Add any from the archive
List<BlockSummaryData> archivedSummaries = repository.getBlockArchiveRepository()
.getBlockSummariesBySigner(accountData.getPublicKey(), limit, offset, reverse);
if (archivedSummaries != null && !archivedSummaries.isEmpty()) {
summaries.addAll(archivedSummaries);
}
else {
summaries = archivedSummaries;
}
// Sort the results (because they may have been obtained from two places)
if (reverse != null && reverse) {
summaries.sort((s1, s2) -> Integer.valueOf(s2.getHeight()).compareTo(Integer.valueOf(s1.getHeight())));
}
else {
summaries.sort(Comparator.comparing(s -> Integer.valueOf(s.getHeight())));
}
return summaries;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -580,7 +702,8 @@ public class BlocksResource {
if (!Crypto.isValidAddress(address))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
return repository.getBlockRepository().getBlockSigners(addresses, limit, offset, reverse);
// This method pulls data from both Blocks and BlockArchive, so no need to query separately
return repository.getBlockArchiveRepository().getBlockSigners(addresses, limit, offset, reverse);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -620,7 +743,76 @@ public class BlocksResource {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getBlockRepository().getBlockSummaries(startHeight, endHeight, count);
/*
* start end count result
* 10 40 null blocks 10 to 39 (excludes end block, ignore count)
*
* null null null blocks 1 to 50 (assume count=50, maybe start=1)
* 30 null null blocks 30 to 79 (assume count=50)
* 30 null 10 blocks 30 to 39
*
* null null 50 last 50 blocks? so if max(blocks.height) is 200, then blocks 151 to 200
* null 200 null blocks 150 to 199 (excludes end block, assume count=50)
* null 200 10 blocks 190 to 199 (excludes end block)
*/
List<BlockSummaryData> blockSummaries = new ArrayList<>();
// Use the latest X blocks if only a count is specified
if (startHeight == null && endHeight == null && count != null) {
BlockData chainTip = repository.getBlockRepository().getLastBlock();
startHeight = chainTip.getHeight() - count;
endHeight = chainTip.getHeight();
}
// ... otherwise default the start height to 1
if (startHeight == null && endHeight == null) {
startHeight = 1;
}
// Default the count to 50
if (count == null) {
count = 50;
}
// If both a start and end height exist, ignore the count
if (startHeight != null && endHeight != null) {
if (startHeight > 0 && endHeight > 0) {
count = Integer.MAX_VALUE;
}
}
// Derive start height from end height if missing
if (startHeight == null || startHeight == 0) {
if (endHeight != null && endHeight > 0) {
if (count != null) {
startHeight = endHeight - count;
}
}
}
for (/* count already set */; count > 0; --count, ++startHeight) {
if (endHeight != null && startHeight >= endHeight) {
break;
}
BlockData blockData = repository.getBlockRepository().fromHeight(startHeight);
if (blockData == null) {
// Not found - try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(startHeight);
if (blockData == null) {
// Run out of blocks!
break;
}
}
if (blockData != null) {
BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);
blockSummaries.add(blockSummaryData);
}
}
return blockSummaries;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
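The database-then-archive fallback recurs throughout this file; here is a hedged sketch of a helper that captures the pattern (the helper itself is hypothetical, but it only uses repository methods visible above):
// Hypothetical helper: check the Blocks table first, then fall back
// to the block archive; null means unknown in both stores.
private static BlockData blockBySignature(Repository repository, byte[] signature) throws DataException {
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
blockData = repository.getBlockArchiveRepository().fromSignature(signature);
return blockData;
}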

View File

@@ -506,28 +506,51 @@ public class BlockChain {
* @throws SQLException
*/
public static void validate() throws DataException {
// Check first block is Genesis Block
if (!isGenesisBlockValid())
rebuildBlockchain();
try (final Repository repository = RepositoryManager.getRepository()) {
boolean pruningEnabled = Settings.getInstance().isPruningEnabled();
BlockData chainTip = repository.getBlockRepository().getLastBlock();
boolean hasBlocks = (chainTip != null && chainTip.getHeight() > 1);
if (pruningEnabled && hasBlocks) {
// Pruning is enabled and we have blocks, so it's possible that the genesis block has been pruned
// It's best not to validate it, and there's no real need to
}
else {
// Check first block is Genesis Block
if (!isGenesisBlockValid()) {
rebuildBlockchain();
}
}
repository.checkConsistency();
int startHeight = Math.max(repository.getBlockRepository().getBlockchainHeight() - 1440, 1);
// Set the number of blocks to validate based on the pruned state of the chain
// If pruned, subtract an extra 10 to allow room for error
int blocksToValidate = pruningEnabled ? Settings.getInstance().getPruneBlockLimit() - 10 : 1440;
int startHeight = Math.max(repository.getBlockRepository().getBlockchainHeight() - blocksToValidate, 1);
BlockData detachedBlockData = repository.getBlockRepository().getDetachedBlockSignature(startHeight);
if (detachedBlockData != null) {
LOGGER.error(String.format("Block %d's reference does not match any block's signature", detachedBlockData.getHeight()));
// Wait for blockchain lock (whereas orphan() only tries to get lock)
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lock();
try {
LOGGER.info(String.format("Orphaning back to block %d", detachedBlockData.getHeight() - 1));
orphan(detachedBlockData.getHeight() - 1);
} finally {
blockchainLock.unlock();
// Orphan if we aren't a pruning node
if (!Settings.getInstance().isPruningEnabled()) {
// Wait for blockchain lock (whereas orphan() only tries to get lock)
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lock();
try {
LOGGER.info(String.format("Orphaning back to block %d", detachedBlockData.getHeight() - 1));
orphan(detachedBlockData.getHeight() - 1);
} finally {
blockchainLock.unlock();
}
}
else {
LOGGER.error(String.format("Not orphaning because we are in pruning mode. You may be on an " +
"invalid chain and should consider bootstrapping or re-syncing from genesis."));
}
}
}

View File

@@ -442,7 +442,8 @@ public class BlockMinter extends Thread {
// Add to blockchain
newBlock.process();
LOGGER.info(String.format("Minted new test block: %d", newBlock.getBlockData().getHeight()));
LOGGER.info(String.format("Minted new test block: %d sig: %.8s",
newBlock.getBlockData().getHeight(), Base58.encode(newBlock.getBlockData().getSignature())));
repository.saveChanges();

View File

@@ -24,7 +24,6 @@ import java.util.Properties;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;
@@ -46,6 +45,7 @@ import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.controller.Synchronizer.SynchronizationResult;
import org.qortal.controller.repository.PruneManager;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crypto.Crypto;
@@ -84,21 +84,14 @@ import org.qortal.network.message.OnlineAccountsMessage;
import org.qortal.network.message.SignaturesMessage;
import org.qortal.network.message.TransactionMessage;
import org.qortal.network.message.TransactionSignaturesMessage;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryFactory;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.transaction.ArbitraryTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.Base58;
import org.qortal.utils.ByteArray;
import org.qortal.utils.DaemonThreadFactory;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import org.qortal.utils.*;
import com.google.common.primitives.Longs;
@@ -358,7 +351,7 @@ public class Controller extends Thread {
return this.savedArgs;
}
/* package */ static boolean isStopping() {
public static boolean isStopping() {
return isStopping;
}
@@ -416,6 +409,8 @@ public class Controller extends Thread {
try {
RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(getRepositoryUrl());
RepositoryManager.setRepositoryFactory(repositoryFactory);
RepositoryManager.archive();
RepositoryManager.prune();
} catch (DataException e) {
// If exception has no cause then repository is in use by some other process.
if (e.getCause() == null) {
@@ -516,9 +511,8 @@ public class Controller extends Thread {
final long repositoryBackupInterval = Settings.getInstance().getRepositoryBackupInterval();
final long repositoryCheckpointInterval = Settings.getInstance().getRepositoryCheckpointInterval();
ExecutorService trimExecutor = Executors.newCachedThreadPool(new DaemonThreadFactory());
trimExecutor.execute(new AtStatesTrimmer());
trimExecutor.execute(new OnlineAccountsSignaturesTrimmer());
// Start executor service for trimming or pruning
PruneManager.getInstance().start();
try {
while (!isStopping) {
@@ -603,13 +597,7 @@ public class Controller extends Thread {
Thread.interrupted();
// Fall-through to exit
} finally {
trimExecutor.shutdownNow();
try {
trimExecutor.awaitTermination(2L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// We tried...
}
PruneManager.getInstance().stop();
}
}
@@ -1292,6 +1280,41 @@ public class Controller extends Thread {
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
if (PruneManager.getInstance().isBlockPruned(blockData.getHeight(), repository)) {
// If this is a pruned block, we likely only have partial data, so best not to send it
blockData = null;
}
}
// If we have no block data, we should check the archive in case it's there
if (blockData == null) {
if (Settings.getInstance().isArchiveEnabled()) {
byte[] bytes = BlockArchiveReader.getInstance().fetchSerializedBlockBytesForSignature(signature, repository);
if (bytes != null) {
CachedBlockMessage blockMessage = new CachedBlockMessage(bytes);
blockMessage.setId(message.getId());
// This call also causes the other needed data to be pulled in from repository
if (!peer.sendMessage(blockMessage)) {
peer.disconnect("failed to send block");
// Don't fall-through to caching because failure to send might be from failure to build message
return;
}
// If request is for a recent block, cache it
if (getChainHeight() - blockData.getHeight() <= blockCacheSize) {
this.stats.getBlockMessageStats.cacheFills.incrementAndGet();
this.blockMessageCache.put(new ByteArray(blockData.getSignature()), blockMessage);
}
// Sent successfully from archive, so nothing more to do
return;
}
}
}
if (blockData == null) {
// We don't have this block
this.stats.getBlockMessageStats.unknownBlocks.getAndIncrement();
@@ -1413,6 +1436,14 @@ public class Controller extends Thread {
BlockData blockData = repository.getBlockRepository().fromReference(parentSignature);
if (blockData != null) {
if (PruneManager.getInstance().isBlockPruned(blockData.getHeight(), repository)) {
// If this request contains a pruned block, we likely only have partial data, so best not to send anything
// We always prune from the oldest first, so it's fine to just check the first block requested
blockData = null;
}
}
while (blockData != null && blockSummaries.size() < numberRequested) {
BlockSummaryData blockSummary = new BlockSummaryData(blockData);
blockSummaries.add(blockSummary);

View File

@@ -0,0 +1,108 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
public class AtStatesPruner implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(AtStatesPruner.class);
@Override
public void run() {
Thread.currentThread().setName("AT States pruner");
boolean archiveMode = false;
if (!Settings.getInstance().isPruningEnabled()) {
// Pruning isn't enabled, but we might want to prune for the purposes of archiving
if (!Settings.getInstance().isArchiveEnabled()) {
// No pruning or archiving, so we must not prune anything
return;
}
else {
// We're allowed to prune blocks that have already been archived
archiveMode = true;
}
}
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getATRepository().getAtPruneHeight();
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates();
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getAtStatesPruneInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null)
continue;
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing())
continue;
// Prune AT states for all blocks up until our latest minus pruneBlockLimit
final int ourLatestHeight = chainTip.getHeight();
int upperPrunableHeight = ourLatestHeight - Settings.getInstance().getPruneBlockLimit();
// In archive mode we are only allowed to trim blocks that have already been archived
if (archiveMode) {
upperPrunableHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1;
// TODO: validate that the actual archived data exists before pruning it?
}
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getAtStatesPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
if (pruneStartHeight >= upperPruneHeight)
continue;
LOGGER.debug(String.format("Pruning AT states between blocks %d and %d...", pruneStartHeight, upperPruneHeight));
int numAtStatesPruned = repository.getATRepository().pruneAtStates(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
int numAtStateDataRowsTrimmed = repository.getATRepository().trimAtStates(
pruneStartHeight, upperPruneHeight, Settings.getInstance().getAtStatesTrimLimit());
repository.saveChanges();
if (numAtStatesPruned > 0 || numAtStateDataRowsTrimmed > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Pruned %d AT state%s between blocks %d and %d",
numAtStatesPruned, (numAtStatesPruned != 1 ? "s" : ""),
finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getATRepository().setAtPruneHeight(pruneStartHeight);
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping AT state base prune height to %d", finalPruneStartHeight));
}
else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
Thread.sleep(5*60*1000L);
}
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to prune AT states: %s", e.getMessage()));
} catch (InterruptedException e) {
// Time to exit
}
}
}

View File

@@ -1,7 +1,8 @@
package org.qortal.controller;
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -20,8 +21,8 @@ public class AtStatesTrimmer implements Runnable {
try (final Repository repository = RepositoryManager.getRepository()) {
int trimStartHeight = repository.getATRepository().getAtTrimHeight();
repository.getATRepository().prepareForAtStateTrimming();
repository.saveChanges();
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates();
while (!Controller.isStopping()) {
repository.discardChanges();
@@ -62,7 +63,7 @@ public class AtStatesTrimmer implements Runnable {
if (upperTrimmableHeight > upperBatchHeight) {
trimStartHeight = upperBatchHeight;
repository.getATRepository().setAtTrimHeight(trimStartHeight);
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
final int finalTrimStartHeight = trimStartHeight;

View File

@@ -0,0 +1,111 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.*;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.utils.NTP;
import java.io.IOException;
public class BlockArchiver implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(BlockArchiver.class);
private static final long INITIAL_SLEEP_PERIOD = 0L; // TODO: 5 * 60 * 1000L + 1234L; // ms
public void run() {
Thread.currentThread().setName("Block archiver");
if (!Settings.getInstance().isArchiveEnabled()) {
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
int startHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
// Don't attempt to archive if we have no ATStatesHeightIndex, as it will be too slow
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
if (!hasAtStatesHeightIndex) {
LOGGER.info("Unable to start block archiver due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
return;
}
// Don't even start building until initial rush has ended
Thread.sleep(INITIAL_SLEEP_PERIOD);
LOGGER.info("Starting block archiver...");
while (!Controller.isStopping()) {
repository.discardChanges();
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
Thread.sleep(Settings.getInstance().getArchiveInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null) {
continue;
}
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing()) {
continue;
}
// Don't attempt to archive if we're not synced yet
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null || chainTip.getTimestamp() < minLatestBlockTimestamp) {
continue;
}
// Build cache of blocks
try {
BlockArchiveWriter writer = new BlockArchiveWriter(startHeight, maximumArchiveHeight, repository);
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
switch (result) {
case OK:
// Increment block archive height
startHeight += writer.getWrittenCount();
repository.getBlockArchiveRepository().setBlockArchiveHeight(startHeight);
repository.saveChanges();
break;
case STOPPING:
return;
// We've reached the limit of the blocks we can archive
// Sleep for a while to allow more to become available
case NOT_ENOUGH_BLOCKS:
// We didn't reach our file size target, so that must mean that we don't have enough blocks
// yet or something went wrong. Sleep for a while and then try again.
Thread.sleep(60 * 60 * 1000L); // 1 hour
break;
case BLOCK_NOT_FOUND:
// We tried to archive a block that didn't exist. This is a major failure and likely means
// that a bootstrap or re-sync is needed. Try again every minute until then.
LOGGER.info("Error: block not found when building archive. If this error persists, " +
"a bootstrap or re-sync may be needed.");
Thread.sleep(60 * 1000L); // 1 minute
break;
}
} catch (IOException | TransformationException e) {
LOGGER.info("Caught exception when creating block cache", e);
}
}
} catch (DataException e) {
LOGGER.info("Caught exception when creating block cache", e);
} catch (InterruptedException e) {
// Do nothing
}
}
}

View File

@@ -0,0 +1,114 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
public class BlockPruner implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(BlockPruner.class);
@Override
public void run() {
Thread.currentThread().setName("Block pruner");
boolean archiveMode = false;
if (!Settings.getInstance().isPruningEnabled()) {
// Pruning isn't enabled, but we might want to prune for the purposes of archiving
if (!Settings.getInstance().isArchiveEnabled()) {
// No pruning or archiving, so we must not prune anything
return;
}
else {
// We're allowed to prune blocks that have already been archived
archiveMode = true;
}
}
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
// Don't attempt to prune if we have no ATStatesHeightIndex, as it will be too slow
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
if (!hasAtStatesHeightIndex) {
LOGGER.info("Unable to start block pruner due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
return;
}
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getBlockPruneInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null)
continue;
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing()) {
continue;
}
// Don't attempt to prune if we're not synced yet
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null || chainTip.getTimestamp() < minLatestBlockTimestamp) {
continue;
}
// Prune all blocks up until our latest minus pruneBlockLimit
final int ourLatestHeight = chainTip.getHeight();
int upperPrunableHeight = ourLatestHeight - Settings.getInstance().getPruneBlockLimit();
// In archive mode we are only allowed to trim blocks that have already been archived
if (archiveMode) {
upperPrunableHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1;
}
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getBlockPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
if (pruneStartHeight >= upperPruneHeight) {
continue;
}
LOGGER.debug(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
if (numBlocksPruned > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Pruned %d block%s between %d and %d",
numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getBlockRepository().setBlockPruneHeight(pruneStartHeight);
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping block base prune height to %d", finalPruneStartHeight));
}
else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
Thread.sleep(10*60*1000L);
}
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to prune blocks: %s", e.getMessage()));
} catch (InterruptedException e) {
// Time to exit
}
}
}

View File

@@ -1,8 +1,9 @@
package org.qortal.controller;
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.BlockChain;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;

View File

@@ -0,0 +1,128 @@
package org.qortal.controller.repository;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import org.qortal.utils.DaemonThreadFactory;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class PruneManager {
private static PruneManager instance;
private boolean pruningEnabled = Settings.getInstance().isPruningEnabled();
private int pruneBlockLimit = Settings.getInstance().getPruneBlockLimit();
private ExecutorService executorService;
private PruneManager() {
}
public static synchronized PruneManager getInstance() {
if (instance == null)
instance = new PruneManager();
return instance;
}
public void start() {
this.executorService = Executors.newCachedThreadPool(new DaemonThreadFactory());
if (Settings.getInstance().isPruningEnabled() &&
!Settings.getInstance().isArchiveEnabled()) {
// Top-only-sync
this.startTopOnlySyncMode();
}
else if (Settings.getInstance().isArchiveEnabled()) {
// Full node with block archive
this.startFullNodeWithBlockArchive();
}
else {
// Full node with full SQL support
this.startFullSQLNode();
}
}
/**
* Top-only-sync
* In this mode, we delete (prune) all blocks except
* a small number of recent ones. There is no need for
* trimming or archiving, because all relevant blocks
* are deleted.
*/
private void startTopOnlySyncMode() {
this.startPruning();
}
/**
* Full node with block archive
* In this mode we archive trimmed blocks, and then
* prune archived blocks to keep the database small
*/
private void startFullNodeWithBlockArchive() {
this.startTrimming();
this.startArchiving();
this.startPruning();
}
/**
* Full node with full SQL support
* In this mode we trim the database but don't prune
* or archive any data, because we want to maintain
* full SQL support of old blocks. This mode will not
* be actively maintained but can be used by those who
* need to perform SQL analysis on older blocks.
*/
private void startFullSQLNode() {
this.startTrimming();
}
private void startPruning() {
this.executorService.execute(new AtStatesPruner());
this.executorService.execute(new BlockPruner());
}
private void startTrimming() {
this.executorService.execute(new AtStatesTrimmer());
this.executorService.execute(new OnlineAccountsSignaturesTrimmer());
}
private void startArchiving() {
this.executorService.execute(new BlockArchiver());
}
public void stop() {
this.executorService.shutdownNow();
try {
this.executorService.awaitTermination(2L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// We tried...
}
}
public boolean isBlockPruned(int height, Repository repository) throws DataException {
if (!this.pruningEnabled) {
return false;
}
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null) {
throw new DataException("Unable to determine chain tip when checking if a block is pruned");
}
final int ourLatestHeight = chainTip.getHeight();
final int latestUnprunedHeight = ourLatestHeight - this.pruneBlockLimit;
return (height < latestUnprunedHeight);
}
}
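A worked example of isBlockPruned() above, assuming the default pruneBlockLimit of 1450 from the commit messages:
// chainTip height = 102000, pruneBlockLimit = 1450
// latestUnprunedHeight = 102000 - 1450 = 100550
// isBlockPruned(100000): 100000 < 100550 -> true (pruned)
// isBlockPruned(101000): 101000 < 100550 -> false (still in HSQLDB)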

View File

@@ -0,0 +1,47 @@
package org.qortal.data.block;
import org.qortal.block.Block;
public class BlockArchiveData {
// Properties
private byte[] signature;
private Integer height;
private Long timestamp;
private byte[] minterPublicKey;
// Constructors
public BlockArchiveData(byte[] signature, Integer height, long timestamp, byte[] minterPublicKey) {
this.signature = signature;
this.height = height;
this.timestamp = timestamp;
this.minterPublicKey = minterPublicKey;
}
public BlockArchiveData(BlockData blockData) {
this.signature = blockData.getSignature();
this.height = blockData.getHeight();
this.timestamp = blockData.getTimestamp();
this.minterPublicKey = blockData.getMinterPublicKey();
}
// Getters/setters
public byte[] getSignature() {
return this.signature;
}
public Integer getHeight() {
return this.height;
}
public Long getTimestamp() {
return this.timestamp;
}
public byte[] getMinterPublicKey() {
return this.minterPublicKey;
}
}

View File

@@ -23,7 +23,7 @@ public class CachedBlockMessage extends Message {
this.block = block;
}
private CachedBlockMessage(byte[] cachedBytes) {
public CachedBlockMessage(byte[] cachedBytes) {
super(MessageType.BLOCK);
this.block = null;

View File

@@ -1,5 +1,7 @@
package org.qortal.repository;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import java.util.Set;
@@ -112,6 +114,14 @@ public interface ATRepository {
*/
public List<ATStateData> getBlockATStatesAtHeight(int height) throws DataException;
/** Rebuild the latest AT states cache, necessary for AT state trimming/pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void rebuildLatestAtStates() throws DataException;
/** Returns height of first trimmable AT state. */
public int getAtTrimHeight() throws DataException;
@@ -121,12 +131,27 @@ public interface ATRepository {
*/
public void setAtTrimHeight(int trimHeight) throws DataException;
/** Hook to allow repository to prepare/cache info for AT state trimming. */
public void prepareForAtStateTrimming() throws DataException;
/** Trims full AT state data between passed heights. Returns number of trimmed rows. */
public int trimAtStates(int minHeight, int maxHeight, int limit) throws DataException;
/** Returns height of first prunable AT state. */
public int getAtPruneHeight() throws DataException;
/** Sets new base height for AT state pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setAtPruneHeight(int pruneHeight) throws DataException;
/** Prunes full AT state data between passed heights. Returns number of pruned rows. */
public int pruneAtStates(int minHeight, int maxHeight) throws DataException;
/** Checks for the presence of the ATStatesHeightIndex in repository */
public boolean hasAtStatesHeightIndex() throws DataException;
/**
* Save ATStateData into repository.
* <p>

View File

@@ -0,0 +1,268 @@
package org.qortal.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Triple;
import static org.qortal.transform.Transformer.INT_LENGTH;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
public class BlockArchiveReader {
private static BlockArchiveReader instance;
private Map<String, Triple<Integer, Integer, Integer>> fileListCache = Collections.synchronizedMap(new HashMap<>());
private static final Logger LOGGER = LogManager.getLogger(BlockArchiveReader.class);
public BlockArchiveReader() {
}
public static synchronized BlockArchiveReader getInstance() {
if (instance == null) {
instance = new BlockArchiveReader();
}
return instance;
}
private void fetchFileList() {
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
File archiveDirFile = archivePath.toFile();
String[] files = archiveDirFile.list();
Map<String, Triple<Integer, Integer, Integer>> map = new HashMap<>();
if (files != null) {
for (String file : files) {
Path filePath = Paths.get(file);
String filename = filePath.getFileName().toString();
// Parse the filename
if (filename == null || !filename.contains("-") || !filename.contains(".")) {
// Not a usable file
continue;
}
// Remove the extension and split the filename into start/end heights
String[] parts = filename.substring(0, filename.lastIndexOf('.')).split("-");
try {
    Integer startHeight = Integer.parseInt(parts[0]);
    Integer endHeight = Integer.parseInt(parts[1]);
    Integer range = endHeight - startHeight;
    map.put(filename, new Triple<>(startHeight, endHeight, range));
} catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
    continue; // Not a usable file
}
}
}
this.fileListCache = Collections.synchronizedMap(map); // Keep the cache thread-safe after rebuilding
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockAtHeight(int height) {
if (this.fileListCache.isEmpty()) {
this.fetchFileList();
}
byte[] serializedBytes = this.fetchSerializedBlockBytesForHeight(height);
if (serializedBytes == null) {
return null;
}
ByteBuffer byteBuffer = ByteBuffer.wrap(serializedBytes);
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = null;
try {
blockInfo = BlockTransformer.fromByteBuffer(byteBuffer);
if (blockInfo != null && blockInfo.getA() != null) {
// Block height is stored outside of the main serialized bytes, so it
// won't be set automatically.
blockInfo.getA().setHeight(height);
}
} catch (TransformationException e) {
return null;
}
return blockInfo;
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockWithSignature(
byte[] signature, Repository repository) {
if (this.fileListCache.isEmpty()) {
this.fetchFileList();
}
Integer height = this.fetchHeightForSignature(signature, repository);
if (height != null) {
return this.fetchBlockAtHeight(height);
}
return null;
}
public List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> fetchBlocksFromRange(
int startHeight, int endHeight) {
List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> blockInfoList = new ArrayList<>();
for (int height = startHeight; height <= endHeight; height++) {
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = this.fetchBlockAtHeight(height);
if (blockInfo == null) {
return blockInfoList;
}
blockInfoList.add(blockInfo);
}
return blockInfoList;
}
public Integer fetchHeightForSignature(byte[] signature, Repository repository) {
// Lookup the height for the requested signature
try {
BlockArchiveData archivedBlock = repository.getBlockArchiveRepository().getBlockArchiveDataForSignature(signature);
if (archivedBlock == null || archivedBlock.getHeight() == null) {
    return null;
}
return archivedBlock.getHeight();
} catch (DataException e) {
return null;
}
}
public int fetchHeightForTimestamp(long timestamp, Repository repository) {
// Lookup the height for the requested timestamp
try {
return repository.getBlockArchiveRepository().getHeightFromTimestamp(timestamp);
} catch (DataException e) {
return 0;
}
}
private String getFilenameForHeight(int height) {
Iterator<Map.Entry<String, Triple<Integer, Integer, Integer>>> it = this.fileListCache.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<String, Triple<Integer, Integer, Integer>> pair = it.next();
    if (pair.getKey() == null || pair.getValue() == null) {
        continue;
    }
    Triple<Integer, Integer, Integer> heightInfo = pair.getValue();
    Integer startHeight = heightInfo.getA();
    Integer endHeight = heightInfo.getB();
    if (height >= startHeight && height <= endHeight) {
        // Found the correct file
        return pair.getKey();
    }
}
}
return null;
}
public byte[] fetchSerializedBlockBytesForSignature(byte[] signature, Repository repository) {
if (this.fileListCache.isEmpty()) {
this.fetchFileList();
}
Integer height = this.fetchHeightForSignature(signature, repository);
if (height != null) {
return this.fetchSerializedBlockBytesForHeight(height);
}
return null;
}
public byte[] fetchSerializedBlockBytesForHeight(int height) {
String filename = this.getFilenameForHeight(height);
if (filename == null) {
// We don't have this block in the archive
// Invalidate the file list cache in case it is out of date
this.invalidateFileListCache();
return null;
}
Path filePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive", filename).toAbsolutePath();
RandomAccessFile file = null;
try {
file = new RandomAccessFile(filePath.toString(), "r");
// Read the fixed-length header. Every readInt() call below advances the
// file pointer, so none of these reads can be removed or reordered.
final int version = file.readInt();
final int startHeight = file.readInt();
final int endHeight = file.readInt();
file.readInt(); // Block count (unused by the reader)
final int variableHeaderLength = file.readInt();
final int fixedHeaderLength = (int) file.getFilePointer();
// End of fixed-length header
// Make sure the version is one we recognize
if (version != 1) {
LOGGER.info("Error: unknown version in file {}: {}", filename, version);
return null;
}
// Verify that the block is within the reported range
if (height < startHeight || height > endHeight) {
LOGGER.info("Error: requested height {} but the range of file {} is {}-{}",
height, filename, startHeight, endHeight);
return null;
}
// Seek to the location of the block index in the variable length header
final int locationOfBlockIndexInVariableHeaderSegment = (height - startHeight) * INT_LENGTH;
file.seek(fixedHeaderLength + locationOfBlockIndexInVariableHeaderSegment);
// Read the value to obtain the index of this block in the data segment
int locationOfBlockInDataSegment = file.readInt();
// Now seek to the block data itself, skipping past the data segment length field
int dataSegmentStartIndex = fixedHeaderLength + variableHeaderLength + INT_LENGTH;
file.seek(dataSegmentStartIndex + locationOfBlockInDataSegment);
// Read the block metadata; both reads advance the file pointer
int blockHeight = file.readInt();
int blockLength = file.readInt();
// Ensure the block height matches the one requested
if (blockHeight != height) {
LOGGER.info("Error: height {} does not match requested: {}", blockHeight, height);
return null;
}
// Now retrieve the block's serialized bytes
byte[] blockBytes = new byte[blockLength];
file.readFully(blockBytes); // readFully() guarantees the whole block is read, not a partial buffer
return blockBytes;
} catch (FileNotFoundException e) {
LOGGER.info("File {} not found: {}", filename, e.getMessage());
return null;
} catch (IOException e) {
LOGGER.info("Unable to read block {} from archive: {}", height, e.getMessage());
return null;
}
finally {
// Close the file
if (file != null) {
try {
file.close();
} catch (IOException e) {
// Failed to close, but no need to handle this
}
}
}
}
public void invalidateFileListCache() {
this.fileListCache.clear();
}
}
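To summarize the reads above, here is a sketch of the version-1 archive file layout as implied by this reader (and by the writer later in this diff). All integers are 4-byte big-endian; this is an illustration rather than a formal specification.

/*
 * Fixed-length header:
 *   int version               - currently 1
 *   int startHeight           - first block height in this file
 *   int endHeight             - last block height in this file
 *   int blockCount            - number of blocks (unused by the reader)
 *   int variableHeaderLength  - size of the block index segment below
 * Variable-length header:
 *   int blockOffset[blockCount] - each block's offset within the data segment
 * Data segment:
 *   int dataSegmentLength
 *   then per block: int height, int length, byte[length] serialized block
 */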

View File

@@ -0,0 +1,130 @@
package org.qortal.repository;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import java.util.List;
public interface BlockArchiveRepository {
/**
* Returns BlockData from archive using block signature.
*
* @param signature
* @return block data, or null if not found in archive.
* @throws DataException
*/
public BlockData fromSignature(byte[] signature) throws DataException;
/**
* Return height of block in archive using block's signature.
*
* @param signature
* @return height, or 0 if not found in the archive.
* @throws DataException
*/
public int getHeightFromSignature(byte[] signature) throws DataException;
/**
* Returns BlockData from archive using block height.
*
* @param height
* @return block data, or null if not found in the archive.
* @throws DataException
*/
public BlockData fromHeight(int height) throws DataException;
/**
* Returns a list of BlockData objects from archive using
* block height range.
*
* @param startHeight
* @param endHeight
* @return a list of BlockData objects, or an empty list if
* not found in the archive. It is not guaranteed that all
* requested blocks will be returned.
* @throws DataException
*/
public List<BlockData> fromRange(int startHeight, int endHeight) throws DataException;
/**
* Returns BlockData from archive using block reference.
* Currently relies on the child block being exactly one
* block higher than its parent. This limitation can be removed
* by storing the reference in the BlockArchive table, but
* this has been avoided to reduce space.
*
* @param reference
* @return block data, or null if either parent or child
* not found in the archive.
* @throws DataException
*/
public BlockData fromReference(byte[] reference) throws DataException;
/**
* Return height of block with timestamp just before passed timestamp.
*
* @param timestamp
* @return height, or 0 if not found in the archive.
* @throws DataException
*/
public int getHeightFromTimestamp(long timestamp) throws DataException;
/**
* Returns block summaries for blocks signed by passed public key, or reward-share with minter with passed public key.
*/
public List<BlockSummaryData> getBlockSummariesBySigner(byte[] signerPublicKey, Integer limit, Integer offset, Boolean reverse) throws DataException;
/**
* Returns summaries of block signers, optionally limited to passed addresses.
* This combines both the BlockArchive and the Blocks data into a single result set.
*/
public List<BlockSignerSummary> getBlockSigners(List<String> addresses, Integer limit, Integer offset, Boolean reverse) throws DataException;
/** Returns height of first unarchived block. */
public int getBlockArchiveHeight() throws DataException;
/** Sets new height for block archiving.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setBlockArchiveHeight(int archiveHeight) throws DataException;
/**
* Returns the block archive data for a given signature, from the block archive.
* <p>
* This method will return null if no block archive has been built for the
* requested signature. In those cases, the height (and other data) can be
* looked up using the Blocks table. This allows a block to be located in
* the archive when we only know its signature.
* <p>
*
* @param signature
* @throws DataException
*/
public BlockArchiveData getBlockArchiveDataForSignature(byte[] signature) throws DataException;
/**
* Saves a block archive entry into the repository.
* <p>
* This can be used to find the height of a block by its signature, without
* having access to the block data itself.
* <p>
*
* @param blockArchiveData
* @throws DataException
*/
public void save(BlockArchiveData blockArchiveData) throws DataException;
/**
* Deletes a block archive entry from the repository.
*
* @param blockArchiveData
* @throws DataException
*/
public void delete(BlockArchiveData blockArchiveData) throws DataException;
}
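A short usage sketch of the intended lookup pattern (the fallback ordering is an assumption based on how the /block endpoints are described in this changeset): query the Blocks table first, then fall back to the archive for heights that have been pruned.

// Sketch: resolve a block by height, falling back to the archive
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null) {
	// Not in HSQLDB - the block may have been pruned, so try the archive
	blockData = repository.getBlockArchiveRepository().fromHeight(height);
}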

View File

@@ -0,0 +1,193 @@
package org.qortal.repository;
import com.google.common.primitives.Ints;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.Block;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class BlockArchiveWriter {
public enum BlockArchiveWriteResult {
OK,
STOPPING,
NOT_ENOUGH_BLOCKS,
BLOCK_NOT_FOUND
}
private static final Logger LOGGER = LogManager.getLogger(BlockArchiveWriter.class);
private int startHeight;
private final int endHeight;
private final Repository repository;
private long fileSizeTarget = 100 * 1024 * 1024; // 100MiB
private boolean shouldEnforceFileSizeTarget = true;
private int writtenCount;
private Path outputPath;
public BlockArchiveWriter(int startHeight, int endHeight, Repository repository) {
this.startHeight = startHeight;
this.endHeight = endHeight;
this.repository = repository;
}
public static int getMaxArchiveHeight(Repository repository) throws DataException {
// We must only archive trimmed blocks, or the archive will grow far too large
final int accountSignaturesTrimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
final int atTrimStartHeight = repository.getATRepository().getAtTrimHeight();
final int trimStartHeight = Math.min(accountSignaturesTrimStartHeight, atTrimStartHeight);
return trimStartHeight - 1; // subtract 1 because these values represent the first _untrimmed_ block
}
public static boolean isArchiverUpToDate(Repository repository) throws DataException {
final int maxArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
final int actualArchiveHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
final float progress = (float)actualArchiveHeight / (float) maxArchiveHeight;
LOGGER.info(String.format("maxArchiveHeight: %d, actualArchiveHeight: %d, progress: %f",
maxArchiveHeight, actualArchiveHeight, progress));
// If archiver is within 90% of the maximum, treat it as up to date
// We need several percent as an allowance because the archiver will only
// save files when they reach the target size
return (progress >= 0.90);
}
public BlockArchiveWriteResult write() throws DataException, IOException, TransformationException, InterruptedException {
// Create the archive folder if it doesn't exist
// This is a subfolder of the db directory, to make bootstrapping easier
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
try {
Files.createDirectories(archivePath);
} catch (IOException e) {
LOGGER.info("Unable to create archive folder");
throw new DataException("Unable to create archive folder");
}
// Determine start height of blocks to fetch
if (startHeight <= 2) {
// Skip genesis block, as it's not designed to be transmitted, and we can build that from blockchain.json
// TODO: include genesis block if we can
startHeight = 2;
}
// Header bytes will store the block indexes
ByteArrayOutputStream headerBytes = new ByteArrayOutputStream();
// Bytes will store the actual block data
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
LOGGER.info(String.format("Fetching blocks from height %d...", startHeight));
int i = 0;
while (headerBytes.size() + bytes.size() < this.fileSizeTarget
|| !this.shouldEnforceFileSizeTarget) {
if (Controller.isStopping()) {
return BlockArchiveWriteResult.STOPPING;
}
if (Controller.getInstance().isSynchronizing()) {
    // Don't compete with synchronization; retry on the next loop iteration
    continue;
}
int currentHeight = startHeight + i;
if (currentHeight > endHeight) {
break;
}
//LOGGER.info("Fetching block {}...", currentHeight);
BlockData blockData = repository.getBlockRepository().fromHeight(currentHeight);
if (blockData == null) {
return BlockArchiveWriteResult.BLOCK_NOT_FOUND;
}
// Write the signature and height into the BlockArchive table
BlockArchiveData blockArchiveData = new BlockArchiveData(blockData);
repository.getBlockArchiveRepository().save(blockArchiveData);
repository.saveChanges();
// Write the block data to some byte buffers
Block block = new Block(repository, blockData);
int blockIndex = bytes.size();
// Write block index to header
headerBytes.write(Ints.toByteArray(blockIndex));
// Write block height
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
byte[] blockBytes = BlockTransformer.toBytes(block);
// Write block length
bytes.write(Ints.toByteArray(blockBytes.length));
// Write block bytes
bytes.write(blockBytes);
i++;
}
int totalLength = headerBytes.size() + bytes.size();
LOGGER.info(String.format("Total length of %d blocks is %d bytes", i, totalLength));
// Validate file size, in case something went wrong
if (totalLength < fileSizeTarget && this.shouldEnforceFileSizeTarget) {
return BlockArchiveWriteResult.NOT_ENOUGH_BLOCKS;
}
// We have enough blocks to create a new file
// Note: this local shadows the endHeight field; it is the last height actually written
int endHeight = startHeight + i - 1;
int version = 1;
String filePath = String.format("%s/%d-%d.dat", archivePath.toString(), startHeight, endHeight);
FileOutputStream fileOutputStream = new FileOutputStream(filePath);
// Write version number
fileOutputStream.write(Ints.toByteArray(version));
// Write start height
fileOutputStream.write(Ints.toByteArray(startHeight));
// Write end height
fileOutputStream.write(Ints.toByteArray(endHeight));
// Write total count
fileOutputStream.write(Ints.toByteArray(i));
// Write dynamic header (block indexes) segment length
fileOutputStream.write(Ints.toByteArray(headerBytes.size()));
// Write dynamic header (block indexes) data
headerBytes.writeTo(fileOutputStream);
// Write data segment (block data) length
fileOutputStream.write(Ints.toByteArray(bytes.size()));
// Write data
bytes.writeTo(fileOutputStream);
// Close the file
fileOutputStream.close();
// Invalidate cache so that the rest of the app picks up the new file
BlockArchiveReader.getInstance().invalidateFileListCache();
this.writtenCount = i;
this.outputPath = Paths.get(filePath);
return BlockArchiveWriteResult.OK;
}
public int getWrittenCount() {
return this.writtenCount;
}
public Path getOutputPath() {
return this.outputPath;
}
public void setFileSizeTarget(long fileSizeTarget) {
this.fileSizeTarget = fileSizeTarget;
}
// For testing, to avoid having to pre-calculate file sizes
public void setShouldEnforceFileSizeTarget(boolean shouldEnforceFileSizeTarget) {
this.shouldEnforceFileSizeTarget = shouldEnforceFileSizeTarget;
}
}
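A minimal usage sketch, unit-test style. The heights and the reduced file size target are illustrative only; an open repository is assumed, and the caller must handle the checked exceptions thrown by write().

// Sketch: write blocks 2-1000 into (roughly) 1MiB archive files
BlockArchiveWriter writer = new BlockArchiveWriter(2, 1000, repository);
writer.setFileSizeTarget(1024 * 1024);
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
if (result == BlockArchiveWriter.BlockArchiveWriteResult.OK) {
	LOGGER.info("Archived {} blocks to {}", writer.getWrittenCount(), writer.getOutputPath());
}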

View File

@@ -137,11 +137,6 @@ public interface BlockRepository {
*/
public List<BlockSummaryData> getBlockSummaries(int firstBlockHeight, int lastBlockHeight) throws DataException;
/**
* Returns block summaries for the passed height range, for API use.
*/
public List<BlockSummaryData> getBlockSummaries(Integer startHeight, Integer endHeight, Integer count) throws DataException;
/** Returns height of first trimmable online accounts signatures. */
public int getOnlineAccountsSignaturesTrimHeight() throws DataException;
@@ -166,6 +161,20 @@ public interface BlockRepository {
*/
public BlockData getDetachedBlockSignature(int startHeight) throws DataException;
/** Returns height of first prunable block. */
public int getBlockPruneHeight() throws DataException;
/** Sets new base height for block pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setBlockPruneHeight(int pruneHeight) throws DataException;
/** Prunes full block data between passed heights. Returns number of pruned rows. */
public int pruneBlocks(int minHeight, int maxHeight) throws DataException;
/**
* Saves block into repository.
*

View File

@@ -12,6 +12,8 @@ public interface Repository extends AutoCloseable {
public BlockRepository getBlockRepository();
public BlockArchiveRepository getBlockArchiveRepository();
public ChatRepository getChatRepository();
public CrossChainRepository getCrossChainRepository();

View File

@@ -1,8 +1,15 @@
package org.qortal.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.repository.hsqldb.HSQLDBDatabaseArchiving;
import org.qortal.repository.hsqldb.HSQLDBDatabasePruning;
import org.qortal.settings.Settings;
import java.sql.SQLException;
public abstract class RepositoryManager {
private static final Logger LOGGER = LogManager.getLogger(RepositoryManager.class);
private static RepositoryFactory repositoryFactory = null;
@@ -51,6 +58,50 @@ public abstract class RepositoryManager {
}
}
public static boolean archive() {
// Bulk archive the database the first time we use archive mode
if (Settings.getInstance().isArchiveEnabled()) {
if (RepositoryManager.canArchiveOrPrune()) {
try {
return HSQLDBDatabaseArchiving.buildBlockArchive();
} catch (DataException e) {
LOGGER.info("Unable to build block archive. The database may have been left in an inconsistent state.");
}
}
else {
LOGGER.info("Unable to build block archive due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
}
}
return false;
}
public static boolean prune() {
// Bulk prune the database the first time we use pruning mode
if (Settings.getInstance().isPruningEnabled() ||
Settings.getInstance().isArchiveEnabled()) {
if (RepositoryManager.canArchiveOrPrune()) {
try {
boolean prunedATStates = HSQLDBDatabasePruning.pruneATStates();
boolean prunedBlocks = HSQLDBDatabasePruning.pruneBlocks();
// Perform repository maintenance to shrink the db size down
if (prunedATStates && prunedBlocks) {
HSQLDBDatabasePruning.performMaintenance();
return true;
}
} catch (SQLException | DataException e) {
LOGGER.info("Unable to bulk prune AT states. The database may have been left in an inconsistent state.");
}
}
else {
LOGGER.info("Unable to prune blocks due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
}
}
return false;
}
public static void setRequestedCheckpoint(Boolean quick) {
quickCheckpointRequested = quick;
}
@@ -77,4 +128,12 @@ public abstract class RepositoryManager {
return SQLException.class.isInstance(cause) && repositoryFactory.isDeadlockException((SQLException) cause);
}
public static boolean canArchiveOrPrune() {
try (final Repository repository = getRepository()) {
return repository.getATRepository().hasAtStatesHeightIndex();
} catch (DataException e) {
return false;
}
}
}
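For context, a hedged sketch of the intended one-off startup ordering (the exact call site, presumably in the Controller startup path, is an assumption): bulk archiving must run before bulk pruning, so that pruning never deletes blocks that have not yet been archived.

// Sketch: run once at startup, before other services begin syncing
RepositoryManager.archive(); // bulk-build the block archive, if enabled and not yet built
RepositoryManager.prune();   // then bulk-prune AT states and blocks, if enabled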

View File

@@ -8,6 +8,7 @@ import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.repository.ATRepository;
@@ -600,6 +601,44 @@ public class HSQLDBATRepository implements ATRepository {
return atStates;
}
@Override
public void rebuildLatestAtStates() throws DataException {
// latestATStatesLock is to prevent concurrent updates on LatestATStates
// that could result in one process using a partial or empty dataset
// because it was in the process of being rebuilt by another thread
synchronized (this.repository.latestATStatesLock) {
LOGGER.trace("Rebuilding latest AT states...");
// Rebuild cache of latest AT states that we can't trim
String deleteSql = "DELETE FROM LatestATStates";
try {
this.repository.executeCheckedUpdate(deleteSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to delete temporary latest AT states cache from repository", e);
}
String insertSql = "INSERT INTO LatestATStates ("
+ "SELECT AT_address, height FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height FROM ATStates "
+ "WHERE ATStates.AT_address = ATs.AT_address "
+ "ORDER BY AT_address DESC, height DESC LIMIT 1"
+ ") "
+ ")";
try {
this.repository.executeCheckedUpdate(insertSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to populate temporary latest AT states cache in repository", e);
}
this.repository.saveChanges();
LOGGER.trace("Rebuilt latest AT states");
}
}
@Override
public int getAtTrimHeight() throws DataException {
String sql = "SELECT AT_trim_height FROM DatabaseInfo";
@@ -625,63 +664,153 @@ public class HSQLDBATRepository implements ATRepository {
this.repository.executeCheckedUpdate(updateSql, trimHeight);
this.repository.saveChanges();
} catch (SQLException e) {
this.repository.examineException(e); // now qualified with this. for consistency
throw new DataException("Unable to set AT state trim height in repository", e);
}
}
}
@Override
public void prepareForAtStateTrimming() throws DataException {
// Rebuild cache of latest AT states that we can't trim
String deleteSql = "DELETE FROM LatestATStates";
try {
this.repository.executeCheckedUpdate(deleteSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to delete temporary latest AT states cache from repository", e);
}
String insertSql = "INSERT INTO LatestATStates ("
+ "SELECT AT_address, height FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height FROM ATStates "
+ "WHERE ATStates.AT_address = ATs.AT_address "
+ "ORDER BY AT_address DESC, height DESC LIMIT 1"
+ ") "
+ ")";
try {
this.repository.executeCheckedUpdate(insertSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to populate temporary latest AT states cache in repository", e);
}
}
@Override
public int trimAtStates(int minHeight, int maxHeight, int limit) throws DataException {
if (minHeight >= maxHeight)
return 0;
// latestATStatesLock is to prevent concurrent updates on LatestATStates
// that could result in one process using a partial or empty dataset
// because it was in the process of being rebuilt by another thread
synchronized (this.repository.latestATStatesLock) {
	// We're often called so there is no need to trim all states in one go.
	// Limit updates to reduce CPU and memory load.
	String sql = "DELETE FROM ATStatesData "
			+ "WHERE height BETWEEN ? AND ? "
			+ "AND NOT EXISTS("
			+ "SELECT TRUE FROM LatestATStates "
			+ "WHERE LatestATStates.AT_address = ATStatesData.AT_address "
			+ "AND LatestATStates.height = ATStatesData.height"
			+ ") "
			+ "LIMIT ?";

	try {
		int modifiedRows = this.repository.executeCheckedUpdate(sql, minHeight, maxHeight, limit);
		this.repository.saveChanges();
		return modifiedRows;
	} catch (SQLException e) {
		repository.examineException(e);
		throw new DataException("Unable to trim AT states in repository", e);
	}
}
}
@Override
public int getAtPruneHeight() throws DataException {
String sql = "SELECT AT_prune_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch AT state prune height from repository", e);
}
}
@Override
public void setAtPruneHeight(int pruneHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET AT_prune_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, pruneHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set AT state prune height in repository", e);
}
}
}
@Override
public int pruneAtStates(int minHeight, int maxHeight) throws DataException {
// latestATStatesLock is to prevent concurrent updates on LatestATStates
// that could result in one process using a partial or empty dataset
// because it was in the process of being rebuilt by another thread
synchronized (this.repository.latestATStatesLock) {
int deletedCount = 0;
for (int height = minHeight; height <= maxHeight; height++) {
// Give up if we're stopping
if (Controller.isStopping()) {
return deletedCount;
}
// Get latest AT states for this height
List<String> atAddresses = new ArrayList<>();
String updateSql = "SELECT AT_address FROM LatestATStates WHERE height = ?";
try (ResultSet resultSet = this.repository.checkedExecute(updateSql, height)) {
if (resultSet != null) {
do {
String atAddress = resultSet.getString(1);
atAddresses.add(atAddress);
} while (resultSet.next());
}
} catch (SQLException e) {
throw new DataException("Unable to fetch latest AT states from repository", e);
}
List<ATStateData> atStates = this.getBlockATStatesAtHeight(height);
for (ATStateData atState : atStates) {
//LOGGER.info("Found atState {} at height {}", atState.getATAddress(), atState.getHeight());
// Give up if we're stopping
if (Controller.isStopping()) {
return deletedCount;
}
if (atAddresses.contains(atState.getATAddress())) {
// We don't want to delete this AT state because it is still active
LOGGER.info("Skipping atState {} at height {}", atState.getATAddress(), atState.getHeight());
continue;
}
// Safe to delete everything else for this height
try {
this.repository.delete("ATStates", "AT_address = ? AND height = ?",
atState.getATAddress(), atState.getHeight());
deletedCount++;
} catch (SQLException e) {
throw new DataException("Unable to delete AT state data from repository", e);
}
}
}
this.repository.saveChanges();
return deletedCount;
}
}
@Override
public boolean hasAtStatesHeightIndex() throws DataException {
String sql = "SELECT INDEX_NAME FROM INFORMATION_SCHEMA.SYSTEM_INDEXINFO where INDEX_NAME='ATSTATESHEIGHTINDEX'";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
return resultSet != null;
} catch (SQLException e) {
throw new DataException("Unable to check for ATStatesHeightIndex in repository", e);
}
}
@Override
public void save(ATStateData atStateData) throws DataException {
// We shouldn't ever save partial ATStateData

View File

@@ -0,0 +1,292 @@
package org.qortal.repository.hsqldb;
import org.qortal.api.ApiError;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.BlockArchiveRepository;
import org.qortal.repository.DataException;
import org.qortal.utils.Triple;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class HSQLDBBlockArchiveRepository implements BlockArchiveRepository {
protected HSQLDBRepository repository;
public HSQLDBBlockArchiveRepository(HSQLDBRepository repository) {
this.repository = repository;
}
@Override
public BlockData fromSignature(byte[] signature) throws DataException {
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockWithSignature(signature, this.repository);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
return null;
}
@Override
public int getHeightFromSignature(byte[] signature) throws DataException {
Integer height = BlockArchiveReader.getInstance().fetchHeightForSignature(signature, this.repository);
if (height == null || height == 0) {
return 0;
}
return height;
}
@Override
public BlockData fromHeight(int height) throws DataException {
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
return null;
}
@Override
public List<BlockData> fromRange(int startHeight, int endHeight) throws DataException {
List<BlockData> blocks = new ArrayList<>();
for (int height = startHeight; height < endHeight; height++) {
BlockData blockData = this.fromHeight(height);
if (blockData == null) {
return blocks;
}
blocks.add(blockData);
}
return blocks;
}
@Override
public BlockData fromReference(byte[] reference) throws DataException {
BlockData referenceBlock = this.repository.getBlockArchiveRepository().fromSignature(reference);
if (referenceBlock != null) {
int height = referenceBlock.getHeight();
if (height > 0) {
// Request the block at height + 1
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height + 1);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
}
}
return null;
}
@Override
public int getHeightFromTimestamp(long timestamp) throws DataException {
String sql = "SELECT height FROM BlockArchive WHERE minted_when <= ? ORDER BY minted_when DESC, height DESC LIMIT 1";
try (ResultSet resultSet = this.repository.checkedExecute(sql, timestamp)) {
if (resultSet == null) {
return 0;
}
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Error fetching height from BlockArchive repository", e);
}
}
@Override
public List<BlockSummaryData> getBlockSummariesBySigner(byte[] signerPublicKey, Integer limit, Integer offset, Boolean reverse) throws DataException {
StringBuilder sql = new StringBuilder(512);
sql.append("SELECT signature, height, BlockArchive.minter FROM ");
// List of minter account's public key and reward-share public keys with minter's public key
sql.append("(SELECT * FROM (VALUES (CAST(? AS QortalPublicKey))) UNION (SELECT reward_share_public_key FROM RewardShares WHERE minter_public_key = ?)) AS PublicKeys (public_key) ");
// Match BlockArchive blocks signed with public key from above list
sql.append("JOIN BlockArchive ON BlockArchive.minter = public_key ");
sql.append("ORDER BY BlockArchive.height ");
if (reverse != null && reverse)
sql.append("DESC ");
HSQLDBRepository.limitOffsetSql(sql, limit, offset);
List<BlockSummaryData> blockSummaries = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString(), signerPublicKey, signerPublicKey)) {
if (resultSet == null)
return blockSummaries;
do {
byte[] signature = resultSet.getBytes(1);
int height = resultSet.getInt(2);
byte[] blockMinterPublicKey = resultSet.getBytes(3);
// Fetch additional info from the archive itself
int onlineAccountsCount = 0;
BlockData blockData = this.fromSignature(signature);
if (blockData != null) {
onlineAccountsCount = blockData.getOnlineAccountsCount();
}
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, blockMinterPublicKey, onlineAccountsCount);
blockSummaries.add(blockSummary);
} while (resultSet.next());
return blockSummaries;
} catch (SQLException e) {
throw new DataException("Unable to fetch minter's block summaries from repository", e);
}
}
@Override
public List<BlockSignerSummary> getBlockSigners(List<String> addresses, Integer limit, Integer offset, Boolean reverse) throws DataException {
String subquerySql = "SELECT minter, COUNT(signature) FROM (" +
"(SELECT minter, signature FROM Blocks) UNION ALL (SELECT minter, signature FROM BlockArchive)" +
") GROUP BY minter";
StringBuilder sql = new StringBuilder(1024);
sql.append("SELECT DISTINCT block_minter, n_blocks, minter_public_key, minter, recipient FROM (");
sql.append(subquerySql);
sql.append(") AS Minters (block_minter, n_blocks) LEFT OUTER JOIN RewardShares ON reward_share_public_key = block_minter ");
if (addresses != null && !addresses.isEmpty()) {
sql.append(" LEFT OUTER JOIN Accounts AS BlockMinterAccounts ON BlockMinterAccounts.public_key = block_minter ");
sql.append(" LEFT OUTER JOIN Accounts AS RewardShareMinterAccounts ON RewardShareMinterAccounts.public_key = minter_public_key ");
sql.append(" JOIN (VALUES ");
final int addressesSize = addresses.size();
for (int ai = 0; ai < addressesSize; ++ai) {
if (ai != 0)
sql.append(", ");
sql.append("(?)");
}
sql.append(") AS FilterAccounts (account) ");
sql.append(" ON FilterAccounts.account IN (recipient, BlockMinterAccounts.account, RewardShareMinterAccounts.account) ");
} else {
addresses = Collections.emptyList();
}
sql.append("ORDER BY n_blocks ");
if (reverse != null && reverse)
sql.append("DESC ");
HSQLDBRepository.limitOffsetSql(sql, limit, offset);
List<BlockSignerSummary> summaries = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString(), addresses.toArray())) {
if (resultSet == null)
return summaries;
do {
byte[] blockMinterPublicKey = resultSet.getBytes(1);
int nBlocks = resultSet.getInt(2);
// May not be present if no reward-share:
byte[] mintingAccountPublicKey = resultSet.getBytes(3);
String minterAccount = resultSet.getString(4);
String recipientAccount = resultSet.getString(5);
BlockSignerSummary blockSignerSummary;
if (recipientAccount == null)
blockSignerSummary = new BlockSignerSummary(blockMinterPublicKey, nBlocks);
else
blockSignerSummary = new BlockSignerSummary(blockMinterPublicKey, nBlocks, mintingAccountPublicKey, minterAccount, recipientAccount);
summaries.add(blockSignerSummary);
} while (resultSet.next());
return summaries;
} catch (SQLException e) {
throw new DataException("Unable to fetch block minters from repository", e);
}
}
@Override
public int getBlockArchiveHeight() throws DataException {
String sql = "SELECT block_archive_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch block archive height from repository", e);
}
}
@Override
public void setBlockArchiveHeight(int archiveHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET block_archive_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, archiveHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set block archive height in repository", e);
}
}
}
@Override
public BlockArchiveData getBlockArchiveDataForSignature(byte[] signature) throws DataException {
String sql = "SELECT height, signature, minted_when, minter FROM BlockArchive WHERE signature = ? LIMIT 1";
try (ResultSet resultSet = this.repository.checkedExecute(sql, signature)) {
if (resultSet == null) {
return null;
}
int height = resultSet.getInt(1);
byte[] sig = resultSet.getBytes(2);
long timestamp = resultSet.getLong(3);
byte[] minterPublicKey = resultSet.getBytes(4);
return new BlockArchiveData(sig, height, timestamp, minterPublicKey);
} catch (SQLException e) {
throw new DataException("Error fetching height from BlockArchive repository", e);
}
}
@Override
public void save(BlockArchiveData blockArchiveData) throws DataException {
HSQLDBSaver saveHelper = new HSQLDBSaver("BlockArchive");
saveHelper.bind("signature", blockArchiveData.getSignature())
.bind("height", blockArchiveData.getHeight())
.bind("minted_when", blockArchiveData.getTimestamp())
.bind("minter", blockArchiveData.getMinterPublicKey());
try {
saveHelper.execute(this.repository);
} catch (SQLException e) {
throw new DataException("Unable to save SimpleBlockData into BlockArchive repository", e);
}
}
@Override
public void delete(BlockArchiveData blockArchiveData) throws DataException {
try {
this.repository.delete("BlockArchive",
"block_signature = ?", blockArchiveData.getSignature());
} catch (SQLException e) {
throw new DataException("Unable to delete SimpleBlockData from BlockArchive repository", e);
}
}
}

View File

@@ -10,6 +10,7 @@ import org.qortal.api.model.BlockSignerSummary;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.BlockTransactionData;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockRepository;
import org.qortal.repository.DataException;
@@ -382,86 +383,6 @@ public class HSQLDBBlockRepository implements BlockRepository {
}
}
@Override
public List<BlockSummaryData> getBlockSummaries(Integer startHeight, Integer endHeight, Integer count) throws DataException {
StringBuilder sql = new StringBuilder(512);
List<Object> bindParams = new ArrayList<>();
sql.append("SELECT signature, height, minter, online_accounts_count, minted_when, transaction_count ");
/*
* start end count result
* 10 40 null blocks 10 to 39 (excludes end block, ignore count)
*
* null null null blocks 1 to 50 (assume count=50, maybe start=1)
* 30 null null blocks 30 to 79 (assume count=50)
* 30 null 10 blocks 30 to 39
*
* null null 50 last 50 blocks? so if max(blocks.height) is 200, then blocks 151 to 200
* null 200 null blocks 150 to 199 (excludes end block, assume count=50)
* null 200 10 blocks 190 to 199 (excludes end block)
*/
if (startHeight != null && endHeight != null) {
sql.append("FROM Blocks ");
sql.append("WHERE height BETWEEN ? AND ?");
bindParams.add(startHeight);
bindParams.add(Integer.valueOf(endHeight - 1));
} else if (endHeight != null || (startHeight == null && count != null)) {
// we are going to return blocks from the end of the chain
if (count == null)
count = 50;
if (endHeight == null) {
sql.append("FROM (SELECT height FROM Blocks ORDER BY height DESC LIMIT 1) AS MaxHeights (max_height) ");
sql.append("JOIN Blocks ON height BETWEEN (max_height - ? + 1) AND max_height ");
bindParams.add(count);
} else {
sql.append("FROM Blocks ");
sql.append("WHERE height BETWEEN ? AND ?");
bindParams.add(Integer.valueOf(endHeight - count));
bindParams.add(Integer.valueOf(endHeight - 1));
}
} else {
// we are going to return blocks from the start of the chain
if (startHeight == null)
startHeight = 1;
if (count == null)
count = 50;
sql.append("FROM Blocks ");
sql.append("WHERE height BETWEEN ? AND ?");
bindParams.add(startHeight);
bindParams.add(Integer.valueOf(startHeight + count - 1));
}
List<BlockSummaryData> blockSummaries = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString(), bindParams.toArray())) {
if (resultSet == null)
return blockSummaries;
do {
byte[] signature = resultSet.getBytes(1);
int height = resultSet.getInt(2);
byte[] minterPublicKey = resultSet.getBytes(3);
int onlineAccountsCount = resultSet.getInt(4);
long timestamp = resultSet.getLong(5);
int transactionCount = resultSet.getInt(6);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, minterPublicKey, onlineAccountsCount,
timestamp, transactionCount);
blockSummaries.add(blockSummary);
} while (resultSet.next());
return blockSummaries;
} catch (SQLException e) {
throw new DataException("Unable to fetch height-ranged block summaries from repository", e);
}
}
@Override
public int getOnlineAccountsSignaturesTrimHeight() throws DataException {
String sql = "SELECT online_signatures_trim_height FROM DatabaseInfo";
@@ -509,6 +430,53 @@ public class HSQLDBBlockRepository implements BlockRepository {
}
}
@Override
public int getBlockPruneHeight() throws DataException {
String sql = "SELECT block_prune_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch block prune height from repository", e);
}
}
@Override
public void setBlockPruneHeight(int pruneHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET block_prune_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, pruneHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set block prune height in repository", e);
}
}
}
@Override
public int pruneBlocks(int minHeight, int maxHeight) throws DataException {
// Don't prune the genesis block
if (minHeight <= 1) {
minHeight = 2;
}
try {
return this.repository.delete("Blocks", "height BETWEEN ? AND ?", minHeight, maxHeight);
} catch (SQLException e) {
throw new DataException("Unable to prune blocks from repository", e);
}
}
@Override
public BlockData getDetachedBlockSignature(int startHeight) throws DataException {
String sql = "SELECT " + BLOCK_DB_COLUMNS + " FROM Blocks "

View File

@@ -0,0 +1,86 @@
package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.repository.BlockArchiveWriter;
import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.transform.TransformationException;
import java.io.IOException;
/**
*
* When switching to an archiving node, we need to archive most of the database contents.
* This involves copying its data into flat files.
* If we do this entirely as a background process, it is very slow and can interfere with syncing.
* However, if we take the approach of doing this in bulk, before starting up the rest of the
* processes, this makes it much faster and less invasive.
*
* From that point, the original background archiving process will run, but can be dialled right down
* so as not to interfere with syncing.
*
*/
public class HSQLDBDatabaseArchiving {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBDatabaseArchiving.class);
public static boolean buildBlockArchive() throws DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository)RepositoryManager.getRepository()) {
// Only build the archive if we have never done so before
int archiveHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
if (archiveHeight > 0) {
// Already archived
return false;
}
LOGGER.info("Building block archive - this process could take a while... (approx. 15 mins on high spec)");
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
int startHeight = 0;
while (!Controller.isStopping()) {
try {
BlockArchiveWriter writer = new BlockArchiveWriter(startHeight, maximumArchiveHeight, repository);
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
switch (result) {
case OK:
// Increment block archive height
startHeight += writer.getWrittenCount();
repository.getBlockArchiveRepository().setBlockArchiveHeight(startHeight);
repository.saveChanges();
break;
case STOPPING:
return false;
case NOT_ENOUGH_BLOCKS:
// We've reached the limit of the blocks we can archive
// Return from the whole method
return true;
case BLOCK_NOT_FOUND:
// We tried to archive a block that didn't exist. This is a major failure and likely means
// that a bootstrap or re-sync is needed. Return from the method
LOGGER.info("Error: block not found when building archive. If this error persists, " +
"a bootstrap or re-sync may be needed.");
return false;
}
} catch (IOException | TransformationException | InterruptedException e) {
LOGGER.info("Caught exception when creating block cache", e);
return false;
}
}
}
// If we got this far then something went wrong (most likely the app is stopping)
return false;
}
}

View File

@@ -0,0 +1,323 @@
package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.BlockArchiveWriter;
import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import java.sql.ResultSet;
import java.sql.SQLException;
/**
*
* When switching from a full node to a pruning node, we need to delete most of the database contents.
* If we do this entirely as a background process, it is very slow and can interfere with syncing.
* However, if we take the approach of transferring only the necessary rows to a new table and then
* deleting the original table, this makes the process much faster. It was taking several days to
* delete the AT states in the background, but only a couple of minutes to copy them to a new table.
*
* The trade off is that we have to go through a form of "reshape" when starting the app for the first
* time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be
* a problem.
*
* Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to
* shrink the database file size down to a fraction of what it was before.
*
* From this point, the original background process will run, but can be dialled right down
* so as not to interfere with syncing.
*
*/
public class HSQLDBDatabasePruning {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBDatabasePruning.class);
public static boolean pruneATStates() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository)RepositoryManager.getRepository()) {
// Only bulk prune AT states if we have never done so before
int pruneHeight = repository.getATRepository().getAtPruneHeight();
if (pruneHeight > 0) {
// Already pruned AT states
return false;
}
if (Settings.getInstance().isArchiveEnabled()) {
// Only proceed if we can see that the archiver has already finished
// This way, if the archiver failed for any reason, we can prune once it has had
// some opportunities to try again
boolean upToDate = BlockArchiveWriter.isArchiverUpToDate(repository);
if (!upToDate) {
return false;
}
}
LOGGER.info("Starting bulk prune of AT states - this process could take a while... " +
"(approx. 2 mins on high spec, or upwards of 30 mins in some cases)");
// Create new AT-states table to hold smaller dataset
repository.executeCheckedUpdate("DROP TABLE IF EXISTS ATStatesNew");
repository.executeCheckedUpdate("CREATE TABLE ATStatesNew ("
+ "AT_address QortalAddress, height INTEGER NOT NULL, state_hash ATStateHash NOT NULL, "
+ "fees QortalAmount NOT NULL, is_initial BOOLEAN NOT NULL, sleep_until_message_timestamp BIGINT, "
+ "PRIMARY KEY (AT_address, height), "
+ "FOREIGN KEY (AT_address) REFERENCES ATs (AT_address) ON DELETE CASCADE)");
repository.executeCheckedUpdate("SET TABLE ATStatesNew NEW SPACE");
repository.executeCheckedUpdate("CHECKPOINT");
// Add a height index
LOGGER.info("Adding index to AT states table...");
repository.executeCheckedUpdate("CREATE INDEX IF NOT EXISTS ATStatesNewHeightIndex ON ATStatesNew (height)");
repository.executeCheckedUpdate("CHECKPOINT");
// Find our latest block
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk block pruning");
return false;
}
// Calculate some constants for later use
final int blockchainHeight = latestBlock.getHeight();
int maximumBlockToTrim = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
if (Settings.getInstance().isArchiveEnabled()) {
// Archive mode - don't prune anything that hasn't been archived yet
maximumBlockToTrim = Math.min(maximumBlockToTrim, repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1);
}
final int startHeight = maximumBlockToTrim;
final int endHeight = blockchainHeight;
final int blockStep = 10000;
// It's essential that we rebuild the latest AT states here, as we are using this data in the next query.
// Failing to do this will result in important AT states being deleted, rendering the database unusable.
repository.getATRepository().rebuildLatestAtStates();
// Loop through all the LatestATStates and copy them to the new table
LOGGER.info("Copying AT states...");
for (int height = 0; height < endHeight; height += blockStep) {
//LOGGER.info(String.format("Copying AT states between %d and %d...", height, height + blockStep - 1));
String sql = "SELECT height, AT_address FROM LatestATStates WHERE height BETWEEN ? AND ?";
try (ResultSet latestAtStatesResultSet = repository.checkedExecute(sql, height, height + blockStep - 1)) {
if (latestAtStatesResultSet != null) {
do {
int latestAtHeight = latestAtStatesResultSet.getInt(1);
String latestAtAddress = latestAtStatesResultSet.getString(2);
// Copy this latest ATState to the new table
//LOGGER.info(String.format("Copying AT %s at height %d...", latestAtAddress, latestAtHeight));
try {
String updateSql = "INSERT INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "WHERE height = ? AND AT_address = ?)";
repository.executeCheckedUpdate(updateSql, latestAtHeight, latestAtAddress);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to copy ATStates", e);
}
if (height >= startHeight) {
// Now copy this AT's states for each recent block it is present in
for (int i = startHeight; i < endHeight; i++) {
if (latestAtHeight < i) {
// This AT finished before this block so there is nothing to copy
continue;
}
//LOGGER.info(String.format("Copying recent AT %s at height %d...", latestAtAddress, i));
try {
// Copy each LatestATState to the new table
String updateSql = "INSERT IGNORE INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "WHERE height = ? AND AT_address = ?)";
repository.executeCheckedUpdate(updateSql, i, latestAtAddress);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to copy ATStates", e);
}
}
}
} while (latestAtStatesResultSet.next());
}
} catch (SQLException e) {
throw new DataException("Unable to copy AT states", e);
}
}
repository.saveChanges();
// Finally, drop the original table and rename
LOGGER.info("Deleting old AT states...");
repository.executeCheckedUpdate("DROP TABLE ATStates");
repository.executeCheckedUpdate("ALTER TABLE ATStatesNew RENAME TO ATStates");
repository.executeCheckedUpdate("ALTER INDEX ATStatesNewHeightIndex RENAME TO ATStatesHeightIndex");
repository.executeCheckedUpdate("CHECKPOINT");
// Update the prune height
repository.getATRepository().setAtPruneHeight(maximumBlockToTrim);
repository.saveChanges();
repository.executeCheckedUpdate("CHECKPOINT");
// Now prune/trim the ATStatesData, as this currently goes back over a month
return HSQLDBDatabasePruning.pruneATStateData();
}
}
/*
* Bulk prune ATStatesData to catch up with the now pruned ATStates table
* This uses the existing AT States trimming code but with a much higher end block
*/
private static boolean pruneATStateData() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
if (Settings.getInstance().isArchiveEnabled()) {
// Don't prune ATStatesData in archive mode
return true;
}
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk ATStatesData pruning");
return false;
}
final int blockchainHeight = latestBlock.getHeight();
int upperPrunableHeight = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
// ATStateData is already trimmed - so carry on from where we left off in the past
int pruneStartHeight = repository.getATRepository().getAtTrimHeight();
LOGGER.info("Starting bulk prune of AT states data - this process could take a while... (approx. 3 mins on high spec)");
while (pruneStartHeight < upperPrunableHeight) {
// Prune all AT state data up until our latest minus pruneBlockLimit (or our archive height)
if (Controller.isStopping()) {
return false;
}
// Override batch size in the settings because this is a one-off process
final int batchSize = 1000;
final int rowLimitPerBatch = 50000;
int upperBatchHeight = pruneStartHeight + batchSize;
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
LOGGER.trace(String.format("Pruning AT states data between %d and %d...", pruneStartHeight, upperPruneHeight));
int numATStatesPruned = repository.getATRepository().trimAtStates(pruneStartHeight, upperPruneHeight, rowLimitPerBatch);
repository.saveChanges();
if (numATStatesPruned > 0) {
LOGGER.trace(String.format("Pruned %d AT states data rows between blocks %d and %d",
numATStatesPruned, pruneStartHeight, upperPruneHeight));
} else {
repository.getATRepository().setAtTrimHeight(upperBatchHeight);
// No need to rebuild the latest AT states as we aren't currently synchronizing
repository.saveChanges();
LOGGER.debug(String.format("Bumping AT states trim height to %d", upperBatchHeight));
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
}
else {
// We've finished pruning
break;
}
}
}
return true;
}
}
public static boolean pruneBlocks() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
// Only bulk prune blocks if we have never done so before
int pruneHeight = repository.getBlockRepository().getBlockPruneHeight();
if (pruneHeight > 0) {
// Already pruned blocks
return false;
}
if (Settings.getInstance().isArchiveEnabled()) {
// Only proceed if we can see that the archiver has already finished
// This way, if the archiver failed for any reason, we can prune once it has had
// some opportunities to try again
boolean upToDate = BlockArchiveWriter.isArchiverUpToDate(repository);
if (!upToDate) {
return false;
}
}
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk block pruning");
return false;
}
final int blockchainHeight = latestBlock.getHeight();
int upperPrunableHeight = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
int pruneStartHeight = 0;
if (Settings.getInstance().isArchiveEnabled()) {
// Archive mode - don't prune anything that hasn't been archived yet
upperPrunableHeight = Math.min(upperPrunableHeight, repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1);
}
LOGGER.info("Starting bulk prune of blocks - this process could take a while... (approx. 5 mins on high spec)");
while (pruneStartHeight < upperPrunableHeight) {
// Prune all blocks up until our latest minus pruneBlockLimit
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getBlockPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
LOGGER.info(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
if (numBlocksPruned > 0) {
LOGGER.info(String.format("Pruned %d block%s between %d and %d",
numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
pruneStartHeight, upperPruneHeight));
} else {
repository.getBlockRepository().setBlockPruneHeight(upperBatchHeight);
repository.saveChanges();
LOGGER.debug(String.format("Bumping block base prune height to %d", upperBatchHeight));
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
}
else {
// We've finished pruning
break;
}
}
}
return true;
}
}
public static void performMaintenance() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
repository.performPeriodicMaintenance();
}
}
}
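
For orientation, a minimal sketch (not part of this changeset) of how these one-off bulk operations might be wired together at startup. The maybeBulkPrune() wrapper is hypothetical; the guard methods it calls (Settings.isPruningEnabled(), RepositoryManager.canArchiveOrPrune()) are introduced elsewhere in this diff.

// Hypothetical startup hook - a sketch only, not the project's actual call site
public static void maybeBulkPrune() throws SQLException, DataException {
    // Skip entirely if pruning is disabled or the ATStatesHeightIndex is missing
    if (!Settings.getInstance().isPruningEnabled() || !RepositoryManager.canArchiveOrPrune())
        return;

    // pruneBlocks() returns false if blocks were already pruned, or if archive
    // mode is enabled but the archiver hasn't finished yet
    if (HSQLDBDatabasePruning.pruneBlocks()) {
        // Defragment the database files to reclaim the space freed above
        HSQLDBDatabasePruning.performMaintenance();
    }
}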

View File

@@ -867,6 +867,30 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("CHECKPOINT");
break;
}
case 35:
// Support for pruning
stmt.execute("ALTER TABLE DatabaseInfo ADD AT_prune_height INT NOT NULL DEFAULT 0");
stmt.execute("ALTER TABLE DatabaseInfo ADD block_prune_height INT NOT NULL DEFAULT 0");
break;
case 36:
// Block archive support
stmt.execute("ALTER TABLE DatabaseInfo ADD block_archive_height INT NOT NULL DEFAULT 0");
// Block archive (lookup table to map signature to height)
// Actual data is stored in archive files outside of the database
stmt.execute("CREATE TABLE BlockArchive (signature BlockSignature, height INTEGER NOT NULL, "
+ "minted_when EpochMillis NOT NULL, minter QortalPublicKey NOT NULL, "
+ "PRIMARY KEY (signature))");
// For finding blocks by height.
stmt.execute("CREATE INDEX BlockArchiveHeightIndex ON BlockArchive (height)");
// For finding blocks by the account that minted them.
stmt.execute("CREATE INDEX BlockArchiveMinterIndex ON BlockArchive (minter)");
// For finding blocks by timestamp or finding height of latest block immediately before timestamp, etc.
stmt.execute("CREATE INDEX BlockArchiveTimestampHeightIndex ON BlockArchive (minted_when, height)");
// Use a separate table space as this table will be very large.
stmt.execute("SET TABLE BlockArchive NEW SPACE");
break;
default:
// nothing to do

View File

@@ -31,22 +31,7 @@ import org.qortal.crypto.Crypto;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.globalization.Translator;
import org.qortal.gui.SysTray;
import org.qortal.repository.ATRepository;
import org.qortal.repository.AccountRepository;
import org.qortal.repository.ArbitraryRepository;
import org.qortal.repository.AssetRepository;
import org.qortal.repository.BlockRepository;
import org.qortal.repository.ChatRepository;
import org.qortal.repository.CrossChainRepository;
import org.qortal.repository.DataException;
import org.qortal.repository.GroupRepository;
import org.qortal.repository.MessageRepository;
import org.qortal.repository.NameRepository;
import org.qortal.repository.NetworkRepository;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.TransactionRepository;
import org.qortal.repository.VotingRepository;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.transaction.HSQLDBTransactionRepository;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
@@ -69,12 +54,14 @@ public class HSQLDBRepository implements Repository {
protected final Map<String, PreparedStatement> preparedStatementCache = new HashMap<>();
// We want the same object corresponding to the actual DB
protected final Object trimHeightsLock = RepositoryManager.getRepositoryFactory();
protected final Object latestATStatesLock = RepositoryManager.getRepositoryFactory();
private final ATRepository atRepository = new HSQLDBATRepository(this);
private final AccountRepository accountRepository = new HSQLDBAccountRepository(this);
private final ArbitraryRepository arbitraryRepository = new HSQLDBArbitraryRepository(this);
private final AssetRepository assetRepository = new HSQLDBAssetRepository(this);
private final BlockRepository blockRepository = new HSQLDBBlockRepository(this);
private final BlockArchiveRepository blockArchiveRepository = new HSQLDBBlockArchiveRepository(this);
private final ChatRepository chatRepository = new HSQLDBChatRepository(this);
private final CrossChainRepository crossChainRepository = new HSQLDBCrossChainRepository(this);
private final GroupRepository groupRepository = new HSQLDBGroupRepository(this);
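
The new latestATStatesLock mirrors the existing trimHeightsLock: both key on the repository factory so that every session shares one monitor. A hedged sketch of the intended usage follows; the call site is an assumption, while rebuildLatestAtStates() itself appears in the tests later in this diff.

// Sketch only: serialize rebuilds of the LatestATStates table across threads,
// e.g. between the AT states trimming thread and a bulk pruning pass
synchronized (this.latestATStatesLock) {
    this.getATRepository().rebuildLatestAtStates();
    this.saveChanges();
}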
@@ -142,6 +129,11 @@ public class HSQLDBRepository implements Repository {
return this.blockRepository;
}
@Override
public BlockArchiveRepository getBlockArchiveRepository() {
return this.blockArchiveRepository;
}
@Override
public ChatRepository getChatRepository() {
return this.chatRepository;

View File

@@ -112,6 +112,32 @@ public class Settings {
* This has a significant effect on execution time. */
private int onlineSignaturesTrimBatchSize = 100; // blocks
/** Whether we should prune old data to reduce database size
* This prevents the node from being able to serve older blocks */
private boolean pruningEnabled = false;
/** The amount of recent blocks we should keep when pruning */
private int pruneBlockLimit = 1450;
/** How often to attempt AT state pruning (ms). */
private long atStatesPruneInterval = 3219L; // milliseconds
/** Block height range to scan for prunable AT states.<br>
* This has a significant effect on execution time. */
private int atStatesPruneBatchSize = 25; // blocks
/** How often to attempt block pruning (ms). */
private long blockPruneInterval = 3219L; // milliseconds
/** Block height range to scan for prunable blocks.<br>
* This has a significant effect on execution time. */
private int blockPruneBatchSize = 10000; // blocks
/** Whether we should archive old data to reduce the database size */
private boolean archiveEnabled = true;
/** How often to attempt archiving (ms). */
private long archiveInterval = 7171L; // milliseconds
// Peer-to-peer related
private boolean isTestNet = false;
/** Port number for inbound peer-to-peer connections. */
@@ -536,4 +562,38 @@ public class Settings {
return this.onlineSignaturesTrimBatchSize;
}
public boolean isPruningEnabled() {
return this.pruningEnabled;
}
public int getPruneBlockLimit() {
return this.pruneBlockLimit;
}
public long getAtStatesPruneInterval() {
return this.atStatesPruneInterval;
}
public int getAtStatesPruneBatchSize() {
return this.atStatesPruneBatchSize;
}
public long getBlockPruneInterval() {
return this.blockPruneInterval;
}
public int getBlockPruneBatchSize() {
return this.blockPruneBatchSize;
}
public boolean isArchiveEnabled() {
return this.archiveEnabled;
}
public long getArchiveInterval() {
return this.archiveInterval;
}
}
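
For illustration, a hedged settings.json fragment exercising the new fields above. Field names match the Java fields, as in the test settings file at the end of this diff; the particular values and combination shown are assumptions (note that bulk ATStatesData pruning is skipped while archiving is enabled).

{
  "pruningEnabled": true,
  "pruneBlockLimit": 1450,
  "blockPruneBatchSize": 10000,
  "archiveEnabled": false,
  "archiveInterval": 7171
}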

View File

@@ -0,0 +1,78 @@
package org.qortal.utils;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import java.util.List;
public class BlockArchiveUtils {
/**
* importFromArchive
* <p>
* Reads the requested block range from the archive
* and imports the BlockData and AT state data hashes.
* This can be used to convert a block archive back
* into the HSQLDB, in order to make it SQL-compatible
* again.
* <p>
* Note: calls discardChanges() and saveChanges(), so
* make sure that you commit any existing repository
* changes before calling this method.
*
* @param startHeight The earliest block to import
* @param endHeight The latest block to import
* @param repository A clean repository session
* @throws DataException
*/
public static void importFromArchive(int startHeight, int endHeight, Repository repository) throws DataException {
repository.discardChanges();
final int requestedRange = endHeight+1-startHeight;
List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> blockInfoList =
BlockArchiveReader.getInstance().fetchBlocksFromRange(startHeight, endHeight);
// Ensure that we have received all of the requested blocks
if (blockInfoList == null || blockInfoList.isEmpty()) {
throw new IllegalStateException("No blocks found when importing from archive");
}
if (blockInfoList.size() != requestedRange) {
throw new IllegalStateException("Non matching block count when importing from archive");
}
Triple<BlockData, List<TransactionData>, List<ATStateData>> firstBlock = blockInfoList.get(0);
if (firstBlock == null || firstBlock.getA().getHeight() != startHeight) {
throw new IllegalStateException("Non matching first block when importing from archive");
}
if (blockInfoList.size() > 0) {
Triple<BlockData, List<TransactionData>, List<ATStateData>> lastBlock =
blockInfoList.get(blockInfoList.size() - 1);
if (lastBlock == null || lastBlock.getA().getHeight() != endHeight) {
throw new IllegalStateException("Non matching last block when importing from archive");
}
}
// Everything seems okay, so go ahead with the import
for (Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo : blockInfoList) {
try {
// Save block
repository.getBlockRepository().save(blockInfo.getA());
// Save AT state data hashes
for (ATStateData atStateData : blockInfo.getC()) {
atStateData.setHeight(blockInfo.getA().getHeight());
repository.getATRepository().save(atStateData);
}
} catch (DataException e) {
repository.discardChanges();
throw new IllegalStateException("Unable to import blocks from archive");
}
}
repository.saveChanges();
}
}
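
A hedged usage sketch for importFromArchive(), mirroring the calls made in BlockArchiveTests further down this diff. The 401-500 range comes from that test; the explicit saveChanges() reflects the Javadoc's warning about committing pending changes first.

// Sketch only: restore a pruned range of blocks from the archive into HSQLDB
try (final Repository repository = RepositoryManager.getRepository()) {
    repository.saveChanges(); // importFromArchive() calls discardChanges() itself
    BlockArchiveUtils.importFromArchive(401, 500, repository);
}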

View File

@@ -0,0 +1,525 @@
package org.qortal.test;
import org.apache.commons.io.FileUtils;
import org.ciyam.at.CompilationException;
import org.ciyam.at.MachineState;
import org.ciyam.at.OpCode;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.controller.BlockMinter;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.group.Group;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.HSQLDBRepository;
import org.qortal.settings.Settings;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.transform.TransformationException;
import org.qortal.utils.BlockArchiveUtils;
import org.qortal.utils.Triple;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import static org.junit.Assert.*;
public class BlockArchiveTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings(); // Necessary to set NTP offset
Common.useSettings("test-settings-v2-block-archive.json");
this.deleteArchiveDirectory();
}
@After
public void afterTest() throws DataException {
this.deleteArchiveDirectory();
}
@Test
public void testWriter() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(900 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
}
}
@Test
public void testWriterAndReader() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(900 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Read block 2 from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> block2Info = reader.fetchBlockAtHeight(2);
BlockData block2ArchiveData = block2Info.getA();
// Read block 2 from the repository
BlockData block2RepositoryData = repository.getBlockRepository().fromHeight(2);
// Ensure the values match
assertEquals(block2ArchiveData.getHeight(), block2RepositoryData.getHeight());
assertArrayEquals(block2ArchiveData.getSignature(), block2RepositoryData.getSignature());
// Test some values in the archive
assertEquals(1, block2ArchiveData.getOnlineAccountsCount());
// Read block 900 from the archive
Triple<BlockData, List<TransactionData>, List<ATStateData>> block900Info = reader.fetchBlockAtHeight(900);
BlockData block900ArchiveData = block900Info.getA();
// Read block 900 from the repository
BlockData block900RepositoryData = repository.getBlockRepository().fromHeight(900);
// Ensure the values match
assertEquals(block900ArchiveData.getHeight(), block900RepositoryData.getHeight());
assertArrayEquals(block900ArchiveData.getSignature(), block900RepositoryData.getSignature());
// Test some values in the archive
assertEquals(1, block900ArchiveData.getOnlineAccountsCount());
}
}
@Test
public void testArchivedAtStates() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 10; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// 9 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(10);
repository.getATRepository().setAtTrimHeight(10);
// Check the max archive height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(9, maximumArchiveHeight);
// Write blocks 2-9 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(9 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(9 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Check blocks 2-9
for (Integer testHeight = 2; testHeight <= 9; testHeight++) {
// Read a block from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(testHeight);
BlockData archivedBlockData = blockInfo.getA();
ATStateData archivedAtStateData = blockInfo.getC().isEmpty() ? null : blockInfo.getC().get(0);
List<TransactionData> archivedTransactions = blockInfo.getB();
// Read the same block from the repository
BlockData repositoryBlockData = repository.getBlockRepository().fromHeight(testHeight);
ATStateData repositoryAtStateData = repository.getATRepository().getATStateAtHeight(atAddress, testHeight);
// Ensure the repository has full AT state data
assertNotNull(repositoryAtStateData.getStateHash());
assertNotNull(repositoryAtStateData.getStateData());
// Check the archived AT state
if (testHeight == 2) {
// Block 2 won't have an archived AT state because it's initial (and has the DEPLOY_AT in the same block)
assertNull(archivedAtStateData);
assertEquals(1, archivedTransactions.size());
assertEquals(Transaction.TransactionType.DEPLOY_AT, archivedTransactions.get(0).getType());
}
else {
// For blocks 3+, ensure the archive has the AT state hashes, but not the state data
assertNotNull(archivedAtStateData.getStateHash());
assertNull(archivedAtStateData.getStateData());
// They also shouldn't have any transactions
assertTrue(archivedTransactions.isEmpty());
}
// Also check the online accounts count and height
assertEquals(1, archivedBlockData.getOnlineAccountsCount());
assertEquals(testHeight, archivedBlockData.getHeight());
// Ensure the values match
assertEquals(archivedBlockData.getHeight(), repositoryBlockData.getHeight());
assertArrayEquals(archivedBlockData.getSignature(), repositoryBlockData.getSignature());
assertEquals(archivedBlockData.getOnlineAccountsCount(), repositoryBlockData.getOnlineAccountsCount());
assertArrayEquals(archivedBlockData.getMinterSignature(), repositoryBlockData.getMinterSignature());
assertEquals(archivedBlockData.getATCount(), repositoryBlockData.getATCount());
assertArrayEquals(archivedBlockData.getReference(), repositoryBlockData.getReference());
assertEquals(archivedBlockData.getTimestamp(), repositoryBlockData.getTimestamp());
assertEquals(archivedBlockData.getATFees(), repositoryBlockData.getATFees());
assertEquals(archivedBlockData.getTotalFees(), repositoryBlockData.getTotalFees());
assertEquals(archivedBlockData.getTransactionCount(), repositoryBlockData.getTransactionCount());
assertArrayEquals(archivedBlockData.getTransactionsSignature(), repositoryBlockData.getTransactionsSignature());
if (testHeight != 2) {
assertArrayEquals(archivedAtStateData.getStateHash(), repositoryAtStateData.getStateHash());
}
}
// Check block 10 (unarchived)
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(10);
assertNull(blockInfo);
}
}
@Test
public void testArchiveAndPrune() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(900 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Ensure the SQL repository contains blocks 2 and 900...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(900));
// Prune all the archived blocks
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 900);
assertEquals(900-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(901);
// Prune the AT states for the archived blocks
repository.getATRepository().rebuildLatestAtStates();
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 900);
assertEquals(900-1, numATStatesPruned);
repository.getATRepository().setAtPruneHeight(901);
// Now ensure the SQL repository is missing blocks 2 and 900...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(900));
// ... but it's not missing blocks 1 and 901 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(901));
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
}
}
@Test
public void testTrimArchivePruneAndOrphan() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Make sure that block 500 has full AT state data and data hash
List<ATStateData> block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
// Trim the first 500 blocks
repository.getBlockRepository().trimOldOnlineAccountsSignatures(0, 500);
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(501);
repository.getATRepository().trimAtStates(0, 500, 1000);
repository.getATRepository().setAtTrimHeight(501);
// Now block 500 should only have the AT state data hash
block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNull(atStatesData.getStateData());
// ... but block 501 should have the full data
List<ATStateData> block501AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(501);
atStatesData = repository.getATRepository().getATStateAtHeight(block501AtStatesData.get(0).getATAddress(), 501);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(500, maximumArchiveHeight);
BlockData block3DataPreArchive = repository.getBlockRepository().fromHeight(3);
// Write blocks 2-500 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(500 - 1, writer.getWrittenCount()); // -1 for the genesis block
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(500 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Ensure the SQL repository contains blocks 2 and 500...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(500));
// Prune all the archived blocks
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 500);
assertEquals(500-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(501);
// Prune the AT states for the archived blocks
repository.getATRepository().rebuildLatestAtStates();
int numATStatesPruned = repository.getATRepository().pruneAtStates(2, 500);
assertEquals(499, numATStatesPruned);
repository.getATRepository().setAtPruneHeight(501);
// Now ensure the SQL repository is missing blocks 2 and 500...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(500));
// ... but it's not missing blocks 1 and 501 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(501));
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Now orphan some unarchived blocks.
BlockUtils.orphanBlocks(repository, 500);
assertEquals(502, (int) repository.getBlockRepository().getLastBlock().getHeight());
// We're close to the lower limit of the SQL database now, so
// we need to import some blocks from the archive
BlockArchiveUtils.importFromArchive(401, 500, repository);
// Ensure the SQL repository now contains block 401 but not 400...
assertNotNull(repository.getBlockRepository().fromHeight(401));
assertNull(repository.getBlockRepository().fromHeight(400));
// Import the remaining 399 blocks
BlockArchiveUtils.importFromArchive(2, 400, repository);
// Verify that block 3 matches the original
BlockData block3DataPostArchive = repository.getBlockRepository().fromHeight(3);
assertArrayEquals(block3DataPreArchive.getSignature(), block3DataPostArchive.getSignature());
assertEquals(block3DataPreArchive.getHeight(), block3DataPostArchive.getHeight());
// Orphan 1 more block, which should be the last one that can be orphaned
BlockUtils.orphanBlocks(repository, 1);
// Orphan another block, which should fail
Exception exception = null;
try {
BlockUtils.orphanBlocks(repository, 1);
} catch (DataException e) {
exception = e;
}
// Ensure that a DataException is thrown because there is no more AT states data available
assertNotNull(exception);
assertEquals(DataException.class, exception.getClass());
// FUTURE: we may be able to retain unique AT states when trimming, to avoid this exception
// and allow orphaning back through blocks with trimmed AT states.
}
}
/**
* Many nodes are missing an ATStatesHeightIndex due to an earlier bug.
* In these cases we disable archiving and pruning, as this index is an
* essential component of these processes.
*/
@Test
public void testMissingAtStatesHeightIndex() throws DataException, SQLException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
// Firstly check that we're able to prune or archive when the index exists
assertTrue(repository.getATRepository().hasAtStatesHeightIndex());
assertTrue(RepositoryManager.canArchiveOrPrune());
// Delete the index
repository.prepareStatement("DROP INDEX ATSTATESHEIGHTINDEX").execute();
// Check that we're unable to prune or archive when the index doesn't exist
assertFalse(repository.getATRepository().hasAtStatesHeightIndex());
assertFalse(RepositoryManager.canArchiveOrPrune());
}
}
private void deleteArchiveDirectory() {
// Delete the archive directory if it exists
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
try {
FileUtils.deleteDirectory(archivePath.toFile());
} catch (IOException e) {
// Not an error, as the directory may not exist yet
}
}
}

View File

@@ -0,0 +1,91 @@
package org.qortal.test;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.BlockMinter;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.Common;
import java.util.ArrayList;
import java.util.List;
import static org.junit.Assert.*;
public class PruneTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
}
@Test
public void testPruning() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks
for (int i = 2; i <= 10; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Make sure that all blocks have full AT state data and data hash
for (Integer i=2; i <= 10; i++) {
BlockData blockData = repository.getBlockRepository().fromHeight(i);
assertNotNull(blockData.getSignature());
assertEquals(i, blockData.getHeight());
List<ATStateData> atStatesDataList = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStatesDataList);
assertFalse(atStatesDataList.isEmpty());
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(atStatesDataList.get(0).getATAddress(), i);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
}
// Prune blocks 2-5
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 5);
assertEquals(4, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(6);
// Prune AT states for blocks 2-5
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 5);
assertEquals(4, numATStatesPruned);
repository.getATRepository().setAtPruneHeight(6);
// Make sure that blocks 2-5 are now missing block data and AT states data
for (Integer i=2; i <= 5; i++) {
BlockData blockData = repository.getBlockRepository().fromHeight(i);
assertNull(blockData);
List<ATStateData> atStatesDataList = repository.getATRepository().getBlockATStatesAtHeight(i);
assertTrue(atStatesDataList.isEmpty());
}
// ... but blocks 6-10 have block data and full AT states data
for (Integer i=6; i <= 10; i++) {
BlockData blockData = repository.getBlockRepository().fromHeight(i);
assertNotNull(blockData.getSignature());
List<ATStateData> atStatesDataList = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStatesDataList);
assertFalse(atStatesDataList.isEmpty());
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(atStatesDataList.get(0).getATAddress(), i);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
}
}
}
}

View File

@@ -21,6 +21,7 @@ import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
@@ -35,13 +36,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetATStateAtHeightWithData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -58,13 +59,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetATStateAtHeightWithoutData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -75,7 +76,7 @@ public class AtRepositoryTests extends Common {
Integer testHeight = maxHeight - 2;
// Trim AT state data
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
repository.getATRepository().trimAtStates(2, maxHeight, 1000);
ATStateData atStateData = repository.getATRepository().getATStateAtHeight(atAddress, testHeight);
@@ -87,13 +88,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetLatestATStateWithData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -111,13 +112,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetLatestATStatePostTrimming() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -129,7 +130,7 @@ public class AtRepositoryTests extends Common {
Integer testHeight = blockchainHeight;
// Trim AT state data
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
// COMMIT to check latest AT states persist / TEMPORARY table interaction
repository.saveChanges();
@@ -144,14 +145,66 @@ public class AtRepositoryTests extends Common {
}
@Test
public void testGetMatchingFinalATStatesWithoutDataValue() throws DataException {
byte[] creationBytes = buildSimpleAT();
public void testOrphanTrimmedATStates() throws DataException {
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
for (int i = 0; i < 10; ++i)
BlockUtils.mintBlock(repository);
int blockchainHeight = repository.getBlockRepository().getBlockchainHeight();
int maxTrimHeight = blockchainHeight - 4;
Integer testHeight = maxTrimHeight + 1;
// Trim AT state data
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
repository.getATRepository().trimAtStates(2, maxTrimHeight, 1000);
// Orphan 3 blocks
// This leaves one more untrimmed block, so the latest AT state should be available
BlockUtils.orphanBlocks(repository, 3);
ATStateData atStateData = repository.getATRepository().getLatestATState(atAddress);
assertEquals(testHeight, atStateData.getHeight());
// We should always have the latest AT state data available
assertNotNull(atStateData.getStateData());
// Orphan 1 more block
Exception exception = null;
try {
BlockUtils.orphanBlocks(repository, 1);
} catch (DataException e) {
exception = e;
}
// Ensure that a DataException is thrown because there is no more AT states data available
assertNotNull(exception);
assertEquals(DataException.class, exception.getClass());
assertEquals(String.format("Can't find previous AT state data for %s", atAddress), exception.getMessage());
// FUTURE: we may be able to retain unique AT states when trimming, to avoid this exception
// and allow orphaning back through blocks with trimmed AT states.
}
}
@Test
public void testGetMatchingFinalATStatesWithoutDataValue() throws DataException {
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -191,13 +244,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetMatchingFinalATStatesWithDataValue() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -237,13 +290,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetBlockATStatesAtHeightWithData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
doDeploy(repository, deployer, creationBytes, fundingAmount);
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint a few blocks
for (int i = 0; i < 10; ++i)
@@ -264,13 +317,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetBlockATStatesAtHeightWithoutData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
doDeploy(repository, deployer, creationBytes, fundingAmount);
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint a few blocks
for (int i = 0; i < 10; ++i)
@@ -280,7 +333,7 @@ public class AtRepositoryTests extends Common {
Integer testHeight = maxHeight - 2;
// Trim AT state data
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
repository.getATRepository().trimAtStates(2, maxHeight, 1000);
List<ATStateData> atStates = repository.getATRepository().getBlockATStatesAtHeight(testHeight);
@@ -297,13 +350,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testSaveATStateWithData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -328,13 +381,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testSaveATStateWithoutData() throws DataException {
byte[] creationBytes = buildSimpleAT();
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -364,67 +417,4 @@ public class AtRepositoryTests extends Common {
assertNull(atStateData.getStateData());
}
}
private byte[] buildSimpleAT() {
// Pretend we use 4 values in data segment
int addrCounter = 4;
// Data segment
ByteBuffer dataByteBuffer = ByteBuffer.allocate(addrCounter * MachineState.VALUE_SIZE);
ByteBuffer codeByteBuffer = ByteBuffer.allocate(512);
// Two-pass version
for (int pass = 0; pass < 2; ++pass) {
codeByteBuffer.clear();
try {
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
} catch (CompilationException e) {
throw new IllegalStateException("Unable to compile AT?", e);
}
}
codeByteBuffer.flip();
byte[] codeBytes = new byte[codeByteBuffer.limit()];
codeByteBuffer.get(codeBytes);
final short ciyamAtVersion = 2;
final short numCallStackPages = 0;
final short numUserStackPages = 0;
final long minActivationAmount = 0L;
return MachineState.toCreationBytes(ciyamAtVersion, codeBytes, dataByteBuffer.array(), numCallStackPages, numUserStackPages, minActivationAmount);
}
private DeployAtTransaction doDeploy(Repository repository, PrivateKeyAccount deployer, byte[] creationBytes, long fundingAmount) throws DataException {
long txTimestamp = System.currentTimeMillis();
byte[] lastReference = deployer.getLastReference();
if (lastReference == null) {
System.err.println(String.format("Qortal account %s has no last reference", deployer.getAddress()));
System.exit(2);
}
Long fee = null;
String name = "Test AT";
String description = "Test AT";
String atType = "Test";
String tags = "TEST";
BaseTransactionData baseTransactionData = new BaseTransactionData(txTimestamp, Group.NO_GROUP, lastReference, deployer.getPublicKey(), fee, null);
TransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, atType, tags, creationBytes, fundingAmount, Asset.QORT);
DeployAtTransaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
fee = deployAtTransaction.calcRecommendedFee();
deployAtTransactionData.setFee(fee);
TransactionUtils.signAndMint(repository, deployAtTransactionData, deployer);
return deployAtTransaction;
}
}

View File

@@ -0,0 +1,81 @@
package org.qortal.test.common;
import org.ciyam.at.CompilationException;
import org.ciyam.at.MachineState;
import org.ciyam.at.OpCode;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transaction.DeployAtTransaction;
import java.nio.ByteBuffer;
public class AtUtils {
public static byte[] buildSimpleAT() {
// Pretend we use 4 values in data segment
int addrCounter = 4;
// Data segment
ByteBuffer dataByteBuffer = ByteBuffer.allocate(addrCounter * MachineState.VALUE_SIZE);
ByteBuffer codeByteBuffer = ByteBuffer.allocate(512);
// Two-pass version
for (int pass = 0; pass < 2; ++pass) {
codeByteBuffer.clear();
try {
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
} catch (CompilationException e) {
throw new IllegalStateException("Unable to compile AT?", e);
}
}
codeByteBuffer.flip();
byte[] codeBytes = new byte[codeByteBuffer.limit()];
codeByteBuffer.get(codeBytes);
final short ciyamAtVersion = 2;
final short numCallStackPages = 0;
final short numUserStackPages = 0;
final long minActivationAmount = 0L;
return MachineState.toCreationBytes(ciyamAtVersion, codeBytes, dataByteBuffer.array(), numCallStackPages, numUserStackPages, minActivationAmount);
}
public static DeployAtTransaction doDeployAT(Repository repository, PrivateKeyAccount deployer, byte[] creationBytes, long fundingAmount) throws DataException {
long txTimestamp = System.currentTimeMillis();
byte[] lastReference = deployer.getLastReference();
if (lastReference == null) {
System.err.println(String.format("Qortal account %s has no last reference", deployer.getAddress()));
System.exit(2);
}
Long fee = null;
String name = "Test AT";
String description = "Test AT";
String atType = "Test";
String tags = "TEST";
BaseTransactionData baseTransactionData = new BaseTransactionData(txTimestamp, Group.NO_GROUP, lastReference, deployer.getPublicKey(), fee, null);
TransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, atType, tags, creationBytes, fundingAmount, Asset.QORT);
DeployAtTransaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
fee = deployAtTransaction.calcRecommendedFee();
deployAtTransactionData.setFee(fee);
TransactionUtils.signAndMint(repository, deployAtTransactionData, deployer);
return deployAtTransaction;
}
}

View File

@@ -0,0 +1,11 @@
{
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2.json",
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 1450,
"repositoryPath": "dbtest"
}