Compare commits

...

23 Commits

Author SHA1 Message Date
CalDescent
b9a0d489d7 Bump version to 1.4.5 2021-03-21 17:06:10 +00:00
catbref
d9d4c4c302 Bump Peer response timeout from 2s to 3s 2021-03-21 16:17:40 +00:00
catbref
81c6d75d62 Adjust Synchronizer.MAXIMUM_BLOCK_STEP to 128, which means final summaries request will have enough to cover MAXIMUM_COMMON_DELTA (8+16+32+64+128 = 248, which is >240) 2021-03-21 16:12:41 +00:00
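The step sizes behind that arithmetic can be sketched roughly as follows (illustrative only, assuming the common-block search doubles its step each round and caps it at MAXIMUM_BLOCK_STEP; this is not the actual Synchronizer code):

int step = 8;      // INITIAL_BLOCK_STEP
int depth = 0;     // how far below the chain tip the search has reached
while (depth < 240) {               // MAXIMUM_COMMON_DELTA
    depth += step;                  // cumulative reach: 8, 24, 56, 120, 248
    step = Math.min(step * 2, 128); // capped at MAXIMUM_BLOCK_STEP
}
// depth ends at 248, so the final summaries request can cover the full 240-block divergence limit.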
catbref
d1419bdfbd Minor comments, adjust max step size when searching for common block 2021-03-21 15:57:00 +00:00
CalDescent
8566d9b7e5 Merge branch 'master' into synchronization-improvements 2021-03-21 15:04:43 +00:00
catbref
b319d6db6b Rework BlockMessage caching with new pseudo outgoing-only message that only caches raw bytes 2021-03-21 14:14:15 +00:00
CalDescent
35fd1d8455 Base58 encode signatures in recently added logs. 2021-03-21 14:12:04 +00:00
CalDescent
be21771e49 Use SYNC_BATCH_SIZE instead of MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST. 2021-03-21 13:58:42 +00:00
catbref
745528a9b1 Peer.sendMessage() should return false when it can't send because it can't build the message 2021-03-21 13:19:59 +00:00
CalDescent
f1422af95b Added retry mechanisms in Synchronizer.syncToPeerChain()
Until now, we required a perfect success rate when syncing with a peer via Synchronizer.syncToPeerChain(). Blocks were requested individually, but the node would give up and lose all progress if a single request failed. In practice, this happened very regularly, and it was difficult to succeed when a large number of blocks (e.g. 20+) needed to be requested.

This commit adds two retry mechanisms, so that each of the two request types (block sigs and blocks) is retried up to 3 times before giving up, potentially avoiding a lot of wasted work. The number of retries is configurable via the MAXIMUM_RETRIES constant, which we could move to settings at some point if this feature proves useful.

The original issue seemed to result in a few side effects:

1. Nodes would spend a large amount of time requesting blocks from peers, only to throw it all away afterwards. This potentially added to network congestion, as nodes spent unnecessary network time serving peers unproductively.

2. A large number of sync attempts were failing, particularly when a fork emerged with a significant number of divergent blocks (20+). This reduced nodes' ability to sync to the correct chain while they still had time to do so. With every block that passed, it became more and more difficult to switch to the correct chain. Eventually, the correct chain would become TOO_DIVERGENT, at which point there was no way to switch automatically and manual intervention was required. I hope that this retry mechanism will increase the chances of nodes automatically moving onto the right chain quickly, avoiding the need for a user to intervene.

3. The POST /admin/forcesync API was unlikely to succeed when the peer's chain had started to diverge from the user's chain. This change should increase its success rate.

Also included in this commit is a MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST constant. This limits the number of block sigs requested in each batch (default 200). Without it, we are unable to increase MAXIMUM_COMMON_DELTA, because the node could try to request thousands of block sigs at once, which unsurprisingly doesn't succeed.
2021-03-21 09:41:36 +00:00
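The retry pattern described in that commit, condensed from the Synchronizer diff further down (an illustrative fragment, not the exact code):

int retryCount = 0;
while (numberSignaturesRequired > 0) {
    List<byte[]> moreBlockSignatures = this.getBlockSignatures(peer, latestPeerSignature, numberOfSignaturesToRequest);

    if (moreBlockSignatures == null || moreBlockSignatures.isEmpty()) {
        if (retryCount >= MAXIMUM_RETRIES)
            return SynchronizationResult.NO_REPLY; // give up on this peer

        retryCount++; // otherwise re-issue the same request
        continue;
    }

    retryCount = 0; // reset because the last request succeeded
    peerBlockSignatures.addAll(moreBlockSignatures);
    numberSignaturesRequired = additionalPeerBlocksAfterCommonBlock - peerBlockSignatures.size();
}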
CalDescent
f92f4dc1e2 Fixed some log entries in Controller.syncToPeerChain() which were incorrectly reporting our height instead of the height of block(s) being requested from the peer. Now reporting the height of the block (or block sigs) being retrieved, which should make it easier to interpret the logs. 2021-03-20 16:18:25 +00:00
catbref
019cfdc1db Minor comment re-org 2021-03-20 11:45:11 +00:00
CalDescent
e694a51cdd Fix for "numberSignaturesRequired" calculation error in Synchronizer.syncToPeerChain()
This bug often prevented the correct number of block signatures (and blocks) from being requested from a peer when trying to sync with it.

It could have quite serious consequences, as it would trigger orphaning back to the common block without first requesting all of the necessary blocks from the peer's chain. Rather than applying a complete copy of the peer's chain, the node could orphan back to the common block and then apply only a few blocks beyond it, leaving the node in an unexpected state, potentially hundreds of blocks behind the peer's current height, which it then has to try to obtain from other peers.

When forks were present, this could result in the node hopping from chain to chain, each time failing to fully synchronise with the peer. Given that we currently discard our chain if our latest block isn't deemed "recent", it is very important that nodes are brought up to the latest block when synchronising with a peer, to avoid constantly triggering discards.

The severity of this bug increased when there was a large disparity between the peer's latest block and the common block height, and it prevented us from increasing MAXIMUM_COMMON_DELTA.
2021-03-20 10:33:23 +00:00
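A worked example of the corrected calculation (numbers are illustrative, not from the codebase): suppose peerHeight = 520, commonBlockHeight = 500, and the summaries exchange has already yielded 5 post-common signatures.

int additionalPeerBlocksAfterCommonBlock = 520 - 500;                    // 20 blocks beyond the common block
int numberSignaturesRequired = additionalPeerBlocksAfterCommonBlock - 5; // 15 more signatures to request
// The previous formula, peerBlockSignatures.size() - (peerHeight - commonBlockHeight),
// gave 5 - 20 = -15, so no further signatures were requested and the node orphaned
// back to the common block with only those 5 blocks to re-apply.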
catbref
4824c4198b Bump version to 1.4.4 2021-03-15 11:00:20 +00:00
catbref
ec7d4f4498 Changed "too busy" logging from debug to trace 2021-03-13 18:30:43 +00:00
catbref
d635de44a8 Added TODO in HSQLDBRepository about deadlock log spam 2021-03-13 18:29:31 +00:00
catbref
bce66bf57f Move HSQLDBRepositoryFactory.POOL_SIZE into Settings as "repositoryConnectionPoolSize" 2021-03-13 18:14:11 +00:00
catbref
0fc5153f9b Merge 'trade-bot-timeout-fix' into master 2021-03-13 17:13:40 +00:00
catbref
0398c2fae1 Try to avoid clogging up network threads by discarding incoming TRANSACTION messages if we're too busy
As importing a transaction requires the blockchain lock, all the network threads
can end up blocked waiting for that lock, especially while the Synchronizer is active.

So we simply discard incoming TRANSACTION messages if we can't immediately
obtain the blockchain lock. Some other peer will probably attempt to
send the transaction again soon anyway.

We also swap transaction lists after the connection handshake.
2021-03-13 17:03:38 +00:00
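The core of that change, condensed from the Controller diff further down: bail out early if the blockchain lock isn't immediately available, rather than blocking a network thread.

ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock()) {
    LOGGER.trace(() -> String.format("Too busy to import %s transaction %s from peer %s",
            transactionData.getType().name(), Base58.encode(transactionData.getSignature()), peer));
    return; // some other peer will likely relay this transaction again soon
}

try {
    // ... import and validate the transaction as before ...
} finally {
    blockchainLock.unlock();
}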
CalDescent
5fc495eb6a Fix for possible logic bug introduced in commit 33a8f31. 2021-03-12 22:05:38 +00:00
CalDescent
7918622e2e Merge pull request #31 from sakumatto/master
Initial Italian translation by Pabs 2021
2021-03-11 11:06:03 +00:00
CalDescent
427fa1816d "blockCacheSize" can now be configured via settings.json. 2021-03-07 10:00:49 +00:00
sakumatto
384dffbf9a Initial Italian translation by Pabs 2021
UI localized to Italian by @Pabs
2021-02-22 20:03:11 +02:00
13 changed files with 497 additions and 34 deletions

View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>1.4.3</version>
<version>1.4.5</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>

View File

@@ -67,8 +67,8 @@ import org.qortal.gui.SysTray;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.ArbitraryDataMessage;
import org.qortal.network.message.BlockMessage;
import org.qortal.network.message.BlockSummariesMessage;
import org.qortal.network.message.CachedBlockMessage;
import org.qortal.network.message.GetArbitraryDataMessage;
import org.qortal.network.message.GetBlockMessage;
import org.qortal.network.message.GetBlockSummariesMessage;
@@ -143,16 +143,15 @@ public class Controller extends Thread {
private ExecutorService callbackExecutor = Executors.newFixedThreadPool(3);
private volatile boolean notifyGroupMembershipChange = false;
private static final int BLOCK_CACHE_SIZE = 10; // To cover typical Synchronizer request + a few spare
/** Latest blocks on our chain. Note: tail/last is the latest block. */
private final Deque<BlockData> latestBlocks = new LinkedList<>();
/** Cache of BlockMessages, indexed by block signature */
@SuppressWarnings("serial")
private final LinkedHashMap<ByteArray, BlockMessage> blockMessageCache = new LinkedHashMap<>() {
private final LinkedHashMap<ByteArray, CachedBlockMessage> blockMessageCache = new LinkedHashMap<>() {
@Override
protected boolean removeEldestEntry(Map.Entry<ByteArray, BlockMessage> eldest) {
return this.size() > BLOCK_CACHE_SIZE;
protected boolean removeEldestEntry(Map.Entry<ByteArray, CachedBlockMessage> eldest) {
return this.size() > Settings.getInstance().getBlockCacheSize();
}
};
@@ -319,11 +318,12 @@ public class Controller extends Thread {
// Set initial chain height/tip
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().getLastBlock();
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
synchronized (this.latestBlocks) {
this.latestBlocks.clear();
for (int i = 0; i < BLOCK_CACHE_SIZE && blockData != null; ++i) {
for (int i = 0; i < blockCacheSize && blockData != null; ++i) {
this.latestBlocks.addFirst(blockData);
blockData = repository.getBlockRepository().fromHeight(blockData.getHeight() - 1);
}
@@ -933,6 +933,7 @@ public class Controller extends Thread {
public void onNewBlock(BlockData latestBlockData) {
// Protective copy
BlockData blockDataCopy = new BlockData(latestBlockData);
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
synchronized (this.latestBlocks) {
BlockData cachedChainTip = this.latestBlocks.peekLast();
@@ -942,7 +943,7 @@ public class Controller extends Thread {
this.latestBlocks.addLast(latestBlockData);
// Trim if necessary
if (this.latestBlocks.size() >= BLOCK_CACHE_SIZE)
if (this.latestBlocks.size() >= blockCacheSize)
this.latestBlocks.pollFirst();
} else {
if (cachedChainTip != null)
@@ -1150,14 +1151,15 @@ public class Controller extends Thread {
ByteArray signatureAsByteArray = new ByteArray(signature);
BlockMessage cachedBlockMessage = this.blockMessageCache.get(signatureAsByteArray);
CachedBlockMessage cachedBlockMessage = this.blockMessageCache.get(signatureAsByteArray);
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
// Check cached latest block message
if (cachedBlockMessage != null) {
this.stats.getBlockMessageStats.cacheHits.incrementAndGet();
// We need to duplicate it to prevent multiple threads setting ID on the same message
BlockMessage clonedBlockMessage = cachedBlockMessage.cloneWithNewId(message.getId());
CachedBlockMessage clonedBlockMessage = cachedBlockMessage.cloneWithNewId(message.getId());
if (!peer.sendMessage(clonedBlockMessage))
peer.disconnect("failed to send block");
@@ -1185,15 +1187,18 @@ public class Controller extends Thread {
Block block = new Block(repository, blockData);
BlockMessage blockMessage = new BlockMessage(block);
CachedBlockMessage blockMessage = new CachedBlockMessage(block);
blockMessage.setId(message.getId());
// This call also causes the other needed data to be pulled in from repository
if (!peer.sendMessage(blockMessage))
if (!peer.sendMessage(blockMessage)) {
peer.disconnect("failed to send block");
// Don't fall-through to caching because failure to send might be from failure to build message
return;
}
// If request is for a recent block, cache it
if (getChainHeight() - blockData.getHeight() <= BLOCK_CACHE_SIZE) {
if (getChainHeight() - blockData.getHeight() <= blockCacheSize) {
this.stats.getBlockMessageStats.cacheFills.incrementAndGet();
this.blockMessageCache.put(new ByteArray(blockData.getSignature()), blockMessage);
@@ -1207,6 +1212,18 @@ public class Controller extends Thread {
TransactionMessage transactionMessage = (TransactionMessage) message;
TransactionData transactionData = transactionMessage.getTransactionData();
/*
* If we can't obtain blockchain lock immediately,
* e.g. Synchronizer is active, or another transaction is taking a while to validate,
* then we're using up a network thread for ages and clogging things up
* so bail out early
*/
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock()) {
LOGGER.trace(() -> String.format("Too busy to import %s transaction %s from peer %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature()), peer));
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
Transaction transaction = Transaction.fromData(repository, transactionData);
@@ -1236,6 +1253,8 @@ public class Controller extends Thread {
LOGGER.debug(() -> String.format("Imported %s transaction %s from peer %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature()), peer));
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing transaction %s from peer %s", Base58.encode(transactionData.getSignature()), peer), e);
} finally {
blockchainLock.unlock();
}
}

View File

@@ -39,10 +39,23 @@ public class Synchronizer {
private static final Logger LOGGER = LogManager.getLogger(Synchronizer.class);
/** Max number of new blocks we aim to add to chain tip in each sync round */
private static final int SYNC_BATCH_SIZE = 200; // XXX move to Settings?
/** Initial jump back of block height when searching for common block with peer */
private static final int INITIAL_BLOCK_STEP = 8;
private static final int MAXIMUM_BLOCK_STEP = 500;
/** Maximum jump back of block height when searching for common block with peer */
private static final int MAXIMUM_BLOCK_STEP = 128;
/** Maximum difference in block height between tip and peer's common block before peer is considered TOO DIVERGENT */
private static final int MAXIMUM_COMMON_DELTA = 240; // XXX move to Settings?
private static final int SYNC_BATCH_SIZE = 200;
/** Maximum number of block signatures we ask from peer in one go */
private static final int MAXIMUM_REQUEST_SIZE = 200; // XXX move to Settings?
/** Number of retry attempts if a peer fails to respond with the requested data */
private static final int MAXIMUM_RETRIES = 3; // XXX move to Settings?
private static Synchronizer instance;
@@ -350,46 +363,88 @@ public class Synchronizer {
// Convert any leftover (post-common) block summaries into signatures to request from peer
List<byte[]> peerBlockSignatures = peerBlockSummaries.stream().map(BlockSummaryData::getSignature).collect(Collectors.toList());
// Calculate the total number of additional blocks this peer has beyond the common block
int additionalPeerBlocksAfterCommonBlock = peerHeight - commonBlockHeight;
// Subtract the number of signatures that we already have, as we don't need to request them again
int numberSignaturesRequired = additionalPeerBlocksAfterCommonBlock - peerBlockSignatures.size();
// Fetch remaining block signatures, if needed
int numberSignaturesRequired = peerBlockSignatures.size() - (peerHeight - commonBlockHeight);
if (numberSignaturesRequired > 0) {
int retryCount = 0;
while (numberSignaturesRequired > 0) {
byte[] latestPeerSignature = peerBlockSignatures.isEmpty() ? commonBlockSig : peerBlockSignatures.get(peerBlockSignatures.size() - 1);
int lastPeerHeight = commonBlockHeight + peerBlockSignatures.size();
int numberOfSignaturesToRequest = Math.min(numberSignaturesRequired, MAXIMUM_REQUEST_SIZE);
LOGGER.trace(String.format("Requesting %d signature%s after height %d, sig %.8s",
numberSignaturesRequired, (numberSignaturesRequired != 1 ? "s": ""), ourHeight, Base58.encode(latestPeerSignature)));
numberOfSignaturesToRequest, (numberOfSignaturesToRequest != 1 ? "s": ""), lastPeerHeight, Base58.encode(latestPeerSignature)));
List<byte[]> moreBlockSignatures = this.getBlockSignatures(peer, latestPeerSignature, numberSignaturesRequired);
List<byte[]> moreBlockSignatures = this.getBlockSignatures(peer, latestPeerSignature, numberOfSignaturesToRequest);
if (moreBlockSignatures == null || moreBlockSignatures.isEmpty()) {
LOGGER.info(String.format("Peer %s failed to respond with more block signatures after height %d, sig %.8s", peer,
ourHeight, Base58.encode(latestPeerSignature)));
return SynchronizationResult.NO_REPLY;
lastPeerHeight, Base58.encode(latestPeerSignature)));
if (retryCount >= MAXIMUM_RETRIES) {
// Give up with this peer
return SynchronizationResult.NO_REPLY;
}
else {
// Retry until retryCount reaches MAXIMUM_RETRIES
retryCount++;
int triesRemaining = MAXIMUM_RETRIES - retryCount;
LOGGER.info(String.format("Re-issuing request to peer %s (%d attempt%s remaining)", peer, triesRemaining, (triesRemaining != 1 ? "s": "")));
continue;
}
}
// Reset retryCount because the last request succeeded
retryCount = 0;
LOGGER.trace(String.format("Received %s signature%s", peerBlockSignatures.size(), (peerBlockSignatures.size() != 1 ? "s" : "")));
peerBlockSignatures.addAll(moreBlockSignatures);
numberSignaturesRequired = additionalPeerBlocksAfterCommonBlock - peerBlockSignatures.size();
}
// Fetch blocks using signatures
LOGGER.debug(String.format("Fetching new blocks from peer %s", peer));
LOGGER.debug(String.format("Fetching new blocks from peer %s after height %d", peer, commonBlockHeight));
List<Block> peerBlocks = new ArrayList<>();
for (byte[] blockSignature : peerBlockSignatures) {
retryCount = 0;
while (peerBlocks.size() < peerBlockSignatures.size()) {
byte[] blockSignature = peerBlockSignatures.get(peerBlocks.size());
LOGGER.debug(String.format("Fetching block with signature %.8s", Base58.encode(blockSignature)));
int blockHeightToRequest = commonBlockHeight + peerBlocks.size() + 1; // +1 because we are requesting the next block, beyond what we already have in the peerBlocks array
Block newBlock = this.fetchBlock(repository, peer, blockSignature);
if (newBlock == null) {
LOGGER.info(String.format("Peer %s failed to respond with block for height %d, sig %.8s", peer,
ourHeight, Base58.encode(blockSignature)));
return SynchronizationResult.NO_REPLY;
LOGGER.info(String.format("Peer %s failed to respond with block for height %d, sig %.8s", peer, blockHeightToRequest, Base58.encode(blockSignature)));
if (retryCount >= MAXIMUM_RETRIES) {
// Give up with this peer
return SynchronizationResult.NO_REPLY;
}
else {
// Retry until retryCount reaches MAXIMUM_RETRIES
retryCount++;
int triesRemaining = MAXIMUM_RETRIES - retryCount;
LOGGER.info(String.format("Re-issuing request to peer %s (%d attempt%s remaining)", peer, triesRemaining, (triesRemaining != 1 ? "s": "")));
continue;
}
}
if (!newBlock.isSignatureValid()) {
LOGGER.info(String.format("Peer %s sent block with invalid signature for height %d, sig %.8s", peer,
ourHeight, Base58.encode(blockSignature)));
blockHeightToRequest, Base58.encode(blockSignature)));
return SynchronizationResult.INVALID_DATA;
}
// Reset retryCount because the last request succeeded
retryCount = 0;
LOGGER.debug(String.format("Received block with height %d, sig: %.8s", newBlock.getBlockData().getHeight(), Base58.encode(blockSignature)));
// Transactions are transmitted without approval status so determine that now
for (Transaction transaction : newBlock.getTransactions())
transaction.setInitialApprovalStatus();
@@ -425,7 +480,7 @@ public class Synchronizer {
ValidationResult blockResult = newBlock.isValid();
if (blockResult != ValidationResult.OK) {
LOGGER.info(String.format("Peer %s sent invalid block for height %d, sig %.8s: %s", peer,
ourHeight, Base58.encode(newBlock.getSignature()), blockResult.name()));
newBlock.getBlockData().getHeight(), Base58.encode(newBlock.getSignature()), blockResult.name()));
return SynchronizationResult.INVALID_DATA;
}
@@ -469,7 +524,8 @@ public class Synchronizer {
// Do we need more signatures?
if (peerBlockSignatures.isEmpty()) {
int numberRequested = maxBatchHeight - ourHeight;
int numberRequested = Math.min(maxBatchHeight - ourHeight, MAXIMUM_REQUEST_SIZE);
LOGGER.trace(String.format("Requesting %d signature%s after height %d, sig %.8s",
numberRequested, (numberRequested != 1 ? "s": ""), ourHeight, Base58.encode(latestPeerSignature)));

View File

@@ -386,7 +386,7 @@ public class BitcoinACCTv1TradeBot implements AcctTradeBot {
// If it has been over 24 hours since we last updated this trade-bot entry then assume AT is never coming back
// and so wipe the trade-bot entry
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD > NTP.getTime()) {
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD < NTP.getTime()) {
LOGGER.info(() -> String.format("AT %s has been gone for too long - deleting trade-bot entry", tradeBotData.getAtAddress()));
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();

View File

@@ -384,7 +384,7 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
// If it has been over 24 hours since we last updated this trade-bot entry then assume AT is never coming back
// and so wipe the trade-bot entry
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD > NTP.getTime()) {
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD < NTP.getTime()) {
LOGGER.info(() -> String.format("AT %s has been gone for too long - deleting trade-bot entry", tradeBotData.getAtAddress()));
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();

View File

@@ -46,7 +46,7 @@ public class Peer {
private static final int CONNECT_TIMEOUT = 2000; // ms
/** Maximum time to wait for a message reply to arrive from peer. (ms) */
private static final int RESPONSE_TIMEOUT = 2000; // ms
private static final int RESPONSE_TIMEOUT = 3000; // ms
/**
* Interval between PING messages to a peer. (ms)
@@ -507,6 +507,7 @@ public class Peer {
}
} catch (MessageException e) {
LOGGER.warn(String.format("Failed to send %s message with ID %d to peer %s: %s", message.getType().name(), message.getId(), this, e.getMessage()));
return false;
} catch (IOException e) {
// Send failure
return false;

View File

@@ -0,0 +1,70 @@
package org.qortal.network.message;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import org.qortal.block.Block;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import com.google.common.primitives.Ints;
// This is an OUTGOING-only Message which more readily lends itself to being cached
public class CachedBlockMessage extends Message {
private Block block = null;
private byte[] cachedBytes = null;
public CachedBlockMessage(Block block) {
super(MessageType.BLOCK);
this.block = block;
}
private CachedBlockMessage(byte[] cachedBytes) {
super(MessageType.BLOCK);
this.block = null;
this.cachedBytes = cachedBytes;
}
public static Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws UnsupportedEncodingException {
throw new UnsupportedOperationException("CachedBlockMessage is for outgoing messages only");
}
@Override
protected byte[] toData() {
// Already serialized?
if (this.cachedBytes != null)
return cachedBytes;
if (this.block == null)
return null;
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(this.block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(this.block));
this.cachedBytes = bytes.toByteArray();
// We no longer need source Block
// and Block contains repository handle which is highly likely to be invalid after this call
this.block = null;
return this.cachedBytes;
} catch (TransformationException | IOException e) {
return null;
}
}
public CachedBlockMessage cloneWithNewId(int newId) {
CachedBlockMessage clone = new CachedBlockMessage(this.cachedBytes);
clone.setId(newId);
return clone;
}
}

View File

@@ -931,6 +931,8 @@ public class HSQLDBRepository implements Repository {
/** Logs other HSQLDB sessions then returns passed exception */
public SQLException examineException(SQLException e) {
// TODO: could log at DEBUG for deadlocks by checking RepositoryManager.isDeadlockRelated(e)?
LOGGER.error(() -> String.format("[Session %d] HSQLDB error: %s", this.sessionId, e.getMessage()), e);
logStatements();

View File

@@ -14,11 +14,11 @@ import org.hsqldb.jdbc.HSQLDBPool;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryFactory;
import org.qortal.settings.Settings;
public class HSQLDBRepositoryFactory implements RepositoryFactory {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBRepositoryFactory.class);
private static final int POOL_SIZE = 100;
/** Log getConnection() calls that take longer than this. (ms) */
private static final long SLOW_CONNECTION_THRESHOLD = 1000L;
@@ -57,7 +57,7 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
HSQLDBRepository.attemptRecovery(connectionUrl);
}
this.connectionPool = new HSQLDBPool(POOL_SIZE);
this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());
this.connectionPool.setUrl(this.connectionUrl);
Properties properties = new Properties();

View File

@@ -89,6 +89,8 @@ public class Settings {
private long repositoryCheckpointInterval = 60 * 60 * 1000L; // 1 hour (ms) default
/** Whether to show a notification when we perform repository 'checkpoint'. */
private boolean showCheckpointNotification = false;
/* How many blocks to cache locally. Defaulted to 10, which covers a typical Synchronizer request + a few spare */
private int blockCacheSize = 10;
/** How long to keep old, full, AT state data (ms). */
private long atStatesMaxLifetime = 2 * 7 * 24 * 60 * 60 * 1000L; // milliseconds
@@ -134,6 +136,8 @@ public class Settings {
private Long slowQueryThreshold = null;
/** Repository storage path. */
private String repositoryPath = "db";
/** Repository connection pool size. Needs to be a bit bigger than maxNetworkThreadPoolSize */
private int repositoryConnectionPoolSize = 100;
// Auto-update sources
private String[] autoUpdateRepos = new String[] {
@@ -361,6 +365,10 @@ public class Settings {
return this.maxTransactionTimestampFuture;
}
public int getBlockCacheSize() {
return this.blockCacheSize;
}
public boolean isTestNet() {
return this.isTestNet;
}
@@ -424,6 +432,10 @@ public class Settings {
return this.repositoryPath;
}
public int getRepositoryConnectionPoolSize() {
return this.repositoryConnectionPoolSize;
}
public boolean isAutoUpdateEnabled() {
return this.autoUpdateEnabled;
}

View File

@@ -0,0 +1,72 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# Italian translation by Pabs 2021
# La modifica della lingua dell'UI è fatta nel file Settings.json
#
# "localeLang": "it",
# Si prega ricordare la virgola alla fine, se questo comando non è sull'ultima riga
ADDRESS_UNKNOWN = indirizzo account sconosciuto
BLOCKCHAIN_NEEDS_SYNC = blockchain deve prima sincronizzarsi
# Blocks
BLOCK_UNKNOWN = blocco sconosciuto
BTC_BALANCE_ISSUE = saldo Bitcoin insufficiente
BTC_NETWORK_ISSUE = Bitcoin/ElectrumX problema di rete
BTC_TOO_SOON = troppo presto per trasmettere transazione Bitcoin (tempo di blocco / tempo di blocco mediano)
CANNOT_MINT = l'account non può coniare
GROUP_UNKNOWN = gruppo sconosciuto
INVALID_ADDRESS = indirizzo non valido
# Assets
INVALID_ASSET_ID = identificazione risorsa non valida
INVALID_CRITERIA = criteri di ricerca non validi
INVALID_DATA = dati non validi
INVALID_HEIGHT = altezza blocco non valida
INVALID_NETWORK_ADDRESS = indirizzo di rete non valido
INVALID_ORDER_ID = identificazione di ordine di risorsa non valida
INVALID_PRIVATE_KEY = chiave privata non valida
INVALID_PUBLIC_KEY = chiave pubblica non valida
INVALID_REFERENCE = riferimento non valido
# Validation
INVALID_SIGNATURE = firma non valida
JSON = Impossibile analizzare il messaggio JSON
NAME_UNKNOWN = nome sconosciuto
NON_PRODUCTION = questa chiamata API non è consentita per i sistemi di produzione
NO_TIME_SYNC = nessuna sincronizzazione dell'orologio ancora
ORDER_UNKNOWN = identificazione di ordine di risorsa sconosciuta
PUBLIC_KEY_NOT_FOUND = chiave pubblica non trovata
REPOSITORY_ISSUE = errore del repositorio
# This one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transazione non valida: %s (%s)
TRANSACTION_UNKNOWN = transazione sconosciuta
TRANSFORMATION_ERROR = non è stato possibile trasformare JSON in transazione
UNAUTHORIZED = Chiamata API non autorizzata

View File

@@ -0,0 +1,46 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
# Italian translation by Pabs 2021
APPLYING_UPDATE_AND_RESTARTING = Applicando aggiornamento automatico e riavviando...
AUTO_UPDATE = Aggiornamento automatico
BLOCK_HEIGHT = altezza
CHECK_TIME_ACCURACY = Controlla la precisione dell'ora
CONNECTING = Collegando
CONNECTION = connessione
CONNECTIONS = connessioni
CREATING_BACKUP_OF_DB_FILES = Creazione di backup dei file di database...
DB_BACKUP = Backup del database
DB_CHECKPOINT = Punto di controllo del database
EXIT = Uscita
MINTING_DISABLED = NON coniando
MINTING_ENABLED = \u2714 Coniando
# Nagging about lack of NTP time sync
NTP_NAG_CAPTION = L'orologio del computer è impreciso!
NTP_NAG_TEXT_UNIX = Installare servizio NTP per ottenere un orologio preciso.
NTP_NAG_TEXT_WINDOWS = Seleziona "Sincronizza orologio" dal menu per correggere.
OPEN_UI = Apri UI
PERFORMING_DB_CHECKPOINT = Salvataggio delle modifiche al database non salvate...
SYNCHRONIZE_CLOCK = Sincronizza orologio
SYNCHRONIZING_BLOCKCHAIN = Sincronizzando
SYNCHRONIZING_CLOCK = Sincronizzando orologio

View File

@@ -0,0 +1,185 @@
# Italian translation by Pabs 2021
ACCOUNT_ALREADY_EXISTS = l'account gia esiste
ACCOUNT_CANNOT_REWARD_SHARE = l'account non può fare la condivisione di ricompensa
ALREADY_GROUP_ADMIN = è già amministratore del gruppo
ALREADY_GROUP_MEMBER = è già membro del gruppo
ALREADY_VOTED_FOR_THAT_OPTION = già votato per questa opzione
ASSET_ALREADY_EXISTS = risorsa già esistente
ASSET_DOES_NOT_EXIST = risorsa non esistente
ASSET_DOES_NOT_MATCH_AT = l'asset non corrisponde all'asset di AT
ASSET_NOT_SPENDABLE = la risorsa non è spendibile
AT_ALREADY_EXISTS = AT gia esiste
AT_IS_FINISHED = AT ha finito
AT_UNKNOWN = AT sconosciuto
BANNED_FROM_GROUP = divietato dal gruppo
BAN_EXISTS = il divieto esiste già
BAN_UNKNOWN = divieto sconosciuto
BUYER_ALREADY_OWNER = l'acquirente è già proprietario
CHAT = Le transazioni CHAT non sono mai valide per l'inclusione nei blocchi
CLOCK_NOT_SYNCED = orologio non sincronizzato
DUPLICATE_OPTION = opzione duplicata
GROUP_ALREADY_EXISTS = gruppo già esistente
GROUP_APPROVAL_DECIDED = approvazione di gruppo già decisa
GROUP_APPROVAL_NOT_REQUIRED = approvazione di gruppo non richiesto
GROUP_DOES_NOT_EXIST = gruppo non esiste
GROUP_ID_MISMATCH = identificazione di gruppo non corrispondente
GROUP_OWNER_CANNOT_LEAVE = il proprietario del gruppo non può lasciare il gruppo
HAVE_EQUALS_WANT = la risorsa avere è uguale a la risorsa volere
INCORRECT_NONCE = PoW nonce sbagliato
INSUFFICIENT_FEE = tariffa insufficiente
INVALID_ADDRESS = indirizzo non valido
INVALID_AMOUNT = importo non valido
INVALID_ASSET_OWNER = proprietario della risorsa non valido
INVALID_AT_TRANSACTION = transazione AT non valida
INVALID_AT_TYPE_LENGTH = lunghezza di "tipo" AT non valida
INVALID_CREATION_BYTES = byte di creazione non validi
INVALID_DATA_LENGTH = lunghezza di dati non valida
INVALID_DESCRIPTION_LENGTH = lunghezza della descrizione non valida
INVALID_GROUP_APPROVAL_THRESHOLD = soglia di approvazione del gruppo non valida
INVALID_GROUP_BLOCK_DELAY = ritardo del blocco di approvazione del gruppo non valido
INVALID_GROUP_ID = identificazione di gruppo non valida
INVALID_GROUP_OWNER = proprietario di gruppo non valido
INVALID_LIFETIME = durata della vita non valida
INVALID_NAME_LENGTH = lunghezza del nome non valida
INVALID_NAME_OWNER = proprietario del nome non valido
INVALID_OPTIONS_COUNT = conteggio di opzioni non validi
INVALID_OPTION_LENGTH = lunghezza di opzioni non valida
INVALID_ORDER_CREATOR = creatore dell'ordine non valido
INVALID_PAYMENTS_COUNT = conteggio pagamenti non validi
INVALID_PUBLIC_KEY = chiave pubblica non valida
INVALID_QUANTITY = quantità non valida
INVALID_REFERENCE = riferimento non valido
INVALID_RETURN = ritorno non valido
INVALID_REWARD_SHARE_PERCENT = percentuale condivisione di ricompensa non valida
INVALID_SELLER = venditore non valido
INVALID_TAGS_LENGTH = lunghezza dei "tag" non valida
INVALID_TX_GROUP_ID = identificazione di gruppo di transazioni non valida
INVALID_VALUE_LENGTH = lunghezza "valore" non valida
INVITE_UNKNOWN = invito di gruppo sconosciuto
JOIN_REQUEST_EXISTS = la richiesta di iscrizione al gruppo già esiste
MAXIMUM_REWARD_SHARES = numero massimo di condivisione di ricompensa raggiunto per l'account
MISSING_CREATOR = creatore mancante
MULTIPLE_NAMES_FORBIDDEN = è vietata la registrazione di multipli nomi per account
NAME_ALREADY_FOR_SALE = nome già in vendita
NAME_ALREADY_REGISTERED = nome già registrato
NAME_DOES_NOT_EXIST = il nome non esiste
NAME_NOT_FOR_SALE = il nome non è in vendita
NAME_NOT_NORMALIZED = il nome non è in forma "normalizzata" Unicode
NEGATIVE_AMOUNT = importo non valido / negativo
NEGATIVE_FEE = tariffa non valida / negativa
NEGATIVE_PRICE = prezzo non valido / negativo
NOT_GROUP_ADMIN = l'account non è un amministratore di gruppo
NOT_GROUP_MEMBER = l'account non è un membro del gruppo
NOT_MINTING_ACCOUNT = l'account non può coniare
NOT_YET_RELEASED = funzione non ancora rilasciata
NO_BALANCE = equilibrio insufficiente
NO_BLOCKCHAIN_LOCK = nodo di blockchain attualmente occupato
NO_FLAG_PERMISSION = l'account non dispone di questa autorizzazione
OK = OK
ORDER_ALREADY_CLOSED = l'ordine di scambio di risorsa è già chiuso
ORDER_DOES_NOT_EXIST = l'ordine di scambio di risorsa non esiste
POLL_ALREADY_EXISTS = il sondaggio già esiste
POLL_DOES_NOT_EXIST = il sondaggio non esiste
POLL_OPTION_DOES_NOT_EXIST = le opzioni di sondaggio non esistono
PUBLIC_KEY_UNKNOWN = chiave pubblica sconosciuta
REWARD_SHARE_UNKNOWN = condivisione di ricompensa sconosciuta
SELF_SHARE_EXISTS = condivisione di sé (condivisione di ricompensa) già esiste
TIMESTAMP_TOO_NEW = timestamp troppo nuovo
TIMESTAMP_TOO_OLD = timestamp troppo vecchio
TOO_MANY_UNCONFIRMED = l'account ha troppe transazioni non confermate in sospeso
TRANSACTION_ALREADY_CONFIRMED = la transazione è già confermata
TRANSACTION_ALREADY_EXISTS = la transazione già esiste
TRANSACTION_UNKNOWN = transazione sconosciuta
TX_GROUP_ID_MISMATCH = identificazione di gruppo della transazione non corrisponde