Compare commits

...

72 Commits

Author SHA1 Message Date
CalDescent
90e8cfc737 qoraHoldersShare reworked to qoraHoldersShareByHeight.
This allows the QORA share percentage to be modified at different heights, based on community votes. Added unit test to simulate a reduction.
2022-07-08 11:12:58 +01:00
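The per-height lookup that replaces the old single qoraHoldersShare value scans the configured (height, share) entries from newest to oldest and takes the first entry at or below the target height, as the getQoraHoldersShareAtHeight() getter in the diff below does. A minimal self-contained sketch of that scan (class and field names here are illustrative):

    import java.util.List;

    public class QoraShareLookup {
        // Illustrative stand-in for the BlockChain.ShareByHeight entries read from blockchain.json
        public static class ShareByHeight {
            public final int height;
            public final long share; // share of block reward/fees, in the chain's scaled units
            public ShareByHeight(int height, long share) { this.height = height; this.share = share; }
        }

        /** Scan from the end of the list; the last entry whose height is <= ourHeight wins. */
        public static long shareAtHeight(List<ShareByHeight> sharesByHeight, int ourHeight) {
            for (int i = sharesByHeight.size() - 1; i >= 0; --i)
                if (sharesByHeight.get(i).height <= ourHeight)
                    return sharesByHeight.get(i).share;
            return 0;
        }
    }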
CalDescent
57bd3c3459 Merge remote-tracking branch 'catbref/auto-update-fix' 2022-07-07 18:48:39 +01:00
CalDescent
ad0d8fac91 Bump version to 3.4.1 2022-07-05 20:56:40 +01:00
CalDescent
a8b58d2007 Reward share limit activation timestamp set to 1657382400000 (Sat Jul 09 2022 16:00:00 UTC) 2022-07-05 20:34:23 +01:00
CalDescent
a099ecf55b Merge branch 'reduce-reward-shares' 2022-07-04 19:58:48 +01:00
CalDescent
6b91b0477d Added version query string param to /blocks/signature/{signature}/data API endpoint, to allow for optional V2 block serialization (with a single combined AT states hash).
Version can only be specified when querying unarchived blocks; archived blocks require V1 for now (and possibly V2 in the future).
2022-07-04 19:57:54 +01:00
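A hedged usage sketch of the new query parameter, assuming a node's core API is reachable on port 12391 (an assumption; adjust for your setup). Requesting version=2 returns the V2 serialization with the single combined AT states hash, and is only accepted for unarchived blocks:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FetchBlockDataV2 {
        public static void main(String[] args) throws Exception {
            String signature58 = args[0]; // base58-encoded block signature
            // Port 12391 is assumed to be the core API port; change to match your node's settings
            URI uri = URI.create("http://localhost:12391/blocks/signature/" + signature58 + "/data?version=2");
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(HttpRequest.newBuilder(uri).GET().build(), HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // base58-encoded serialized block, V2 form
        }
    }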
CalDescent
d7e7c1f48c Fixed bugs from merge conflict, causing incorrect systray statuses in some cases. 2022-07-02 10:33:00 +01:00
CalDescent
7c5932a512 GET /admin/status now returns online account submission status for "isMintingPossible", instead of BlockMinter status.
Online account credit is a more useful definition of "minting" than block signing, from the user's perspective. Should bring UI minting/syncing status in line with the core's systray status.
2022-07-01 17:29:15 +01:00
CalDescent
610a3fcf83 Improved order in getNodeType() 2022-07-01 16:48:57 +01:00
CalDescent
b329dc41bc Updated incorrect ONLINE_ACCOUNTS_V3_PEER_VERSION to 3.4.0 2022-07-01 13:36:56 +01:00
CalDescent
ef249066cd Updated another reference of SimpleTransaction::getTimestamp 2022-07-01 13:13:55 +01:00
CalDescent
ca7d58c272 SimpleTransaction.timestamp is now in milliseconds instead of seconds.
Should fix 1970 timestamp issue in UI for foreign transactions, and also maintains consistency with QORT wallet transactions.
2022-07-01 12:46:20 +01:00
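The 1970 display bug is the classic symptom of a seconds-precision epoch value being treated as milliseconds. A tiny illustrative sketch of the conversion now applied before populating SimpleTransaction (names and the sample value are illustrative):

    import java.util.Date;

    public class TimestampUnits {
        public static void main(String[] args) {
            // Foreign-chain wallets often report seconds-precision epoch values;
            // SimpleTransaction (and the UI) expect milliseconds.
            long timestampSeconds = 1_656_667_580L;          // illustrative value from a foreign wallet
            long timestampMillis = timestampSeconds * 1000L; // what the UI expects
            System.out.println(new Date(timestampMillis));   // renders as a 2022 date, not 1970
        }
    }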
CalDescent
08f3351a7a Reward share transaction modifications:
- Reduce the concurrent reward share limit from 6 to 3 (or from 5 to 2 when a self share is included) - as per community vote.
- Founders remain at 6 (5 when a self share is included) - also decided by community vote.
- When all slots are filled, require that at least one is a self share, so that not all slots can be used for sponsorship.
- Activates at a future timestamp, yet to be decided.
2022-07-01 12:18:48 +01:00
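The limit reduction is activated by timestamp rather than hard-coded, via the maxRewardSharesByTimestamp list added to the blockchain config (visible in the diff below); founders use the separate getMaxRewardSharesPerFounderMintingAccount() value. A self-contained sketch of the timestamp lookup, with illustrative names:

    import java.util.List;

    public class RewardShareLimitLookup {
        // Illustrative stand-in for BlockChain.MaxRewardSharesByTimestamp entries
        public static class MaxSharesByTimestamp {
            public final long timestamp;
            public final int maxShares;
            public MaxSharesByTimestamp(long timestamp, int maxShares) {
                this.timestamp = timestamp;
                this.maxShares = maxShares;
            }
        }

        /** Non-founder limit: the last entry whose activation timestamp has passed wins (e.g. 6 before activation, 3 after). */
        public static int maxSharesAt(List<MaxSharesByTimestamp> limits, long ourTimestamp) {
            for (int i = limits.size() - 1; i >= 0; --i)
                if (limits.get(i).timestamp <= ourTimestamp)
                    return limits.get(i).maxShares;
            return 0;
        }
    }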
QuickMythril
f499ada94c Merge pull request #91 from qortish/master
Update SysTray_sv.properties
2022-06-29 08:44:47 -04:00
qortish
f073040c06 Update SysTray_sv.properties
proper
2022-06-29 14:04:27 +02:00
CalDescent
49bfb43bd2 Updated AdvancedInstaller project for v3.4.0 2022-06-28 22:56:11 +01:00
CalDescent
425c70719c Bump version to 3.4.0 2022-06-28 19:29:09 +01:00
CalDescent
1420aea600 aggregateSignatureTimestamp set to 1656864000000 (Sun Jul 03 2022 16:00:00 UTC) 2022-06-28 19:26:00 +01:00
CalDescent
4543062700 Updated blockchain.json files in unit tests to include an already active "aggregateSignatureTimestamp" 2022-06-28 19:22:54 +01:00
CalDescent
722468a859 Restrict relays to v3.4.0 peers and above, in an attempt to avoid bugs causing older peers to break relay chains. 2022-06-27 19:38:30 +01:00
CalDescent
492a9ed3cf Fixed more message rebroadcasts that were missing IDs. 2022-06-26 20:02:08 +01:00
CalDescent
420b577606 No longer adding inferior chain signatures in comparePeers() as it doesn't seem 100% reliable in some cases. It's better to re-check weights on each pass. 2022-06-26 18:24:33 +01:00
CalDescent
434038fd12 Reduced online accounts log spam 2022-06-26 16:34:04 +01:00
CalDescent
a9b154b783 Modified BlockMinter.higherWeightChainExists() so that it checks for invalid blocks before treating a chain as higher weight. Otherwise minting is slowed down when a higher weight but invalid chain exists on the network (e.g. after a hard fork). 2022-06-26 15:54:41 +01:00
CalDescent
a01652b816 Removed hasInvalidBlock filtering, as this was unnecessary risk now that the original bug in comparePeers() is fixed. 2022-06-26 10:09:24 +01:00
CalDescent
4440e82bb9 Fixed long term bug in comparePeers() causing peers with invalid blocks to prevent alternate valid but lower weight candidates from being chosen. 2022-06-25 16:34:42 +01:00
CalDescent
a2e1efab90 Synchronize hasInvalidBlock predicate, as it wasn't thread safe 2022-06-25 14:12:21 +01:00
CalDescent
7e1ce38f0a Fixed major bug in hasInvalidBlock predicate 2022-06-25 14:11:25 +01:00
CalDescent
a93bae616e Invalid signatures are now stored as ByteArray instead of String, to avoid regular Base58 encoding and decoding, which is very inefficient. 2022-06-25 13:29:53 +01:00
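Using raw bytes as set/map keys needs a wrapper with content-based equals() and hashCode(); the core has its own ByteArray type for this, and the standalone sketch below just shows the idea of keying on bytes without the repeated Base58 encode/decode round-trips:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    /** Minimal wrapper so raw signature bytes can be used as set/map keys (illustrative; the core uses its own ByteArray). */
    public final class ByteArrayKey {
        private final byte[] bytes;
        public ByteArrayKey(byte[] bytes) { this.bytes = bytes; }

        @Override public boolean equals(Object other) {
            return other instanceof ByteArrayKey && Arrays.equals(this.bytes, ((ByteArrayKey) other).bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(this.bytes); }

        public static void main(String[] args) {
            Set<ByteArrayKey> invalidBlockSignatures = new HashSet<>();
            invalidBlockSignatures.add(new ByteArrayKey(new byte[] { 1, 2, 3 }));
            System.out.println(invalidBlockSignatures.contains(new ByteArrayKey(new byte[] { 1, 2, 3 }))); // true
        }
    }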
CalDescent
a2568936a0 Synchronizer: filter out peers reporting to hold invalid block signatures.
We already mark peers as misbehaved if they returned invalid signatures, but this wasn't sufficient when multiple copies of the same invalid block exist on the network (e.g. after a hard fork). In these cases, we need to be more proactive to avoid syncing with these peers, to increase the chances of preserving other candidate blocks.
2022-06-25 12:45:19 +01:00
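A hedged sketch of the filtering idea: before choosing sync candidates, drop any peer whose reported chain tip matches a block signature already known to be invalid. The peer/field names are illustrative; ByteBuffer keys are used here purely to keep the sketch self-contained:

    import java.nio.ByteBuffer;
    import java.util.List;
    import java.util.Set;

    public class InvalidTipPeerFilter {
        /** Illustrative peer summary: address plus the signature of the block the peer reports as its chain tip. */
        public static class PeerSummary {
            public final String address;
            public final byte[] lastBlockSignature;
            public PeerSummary(String address, byte[] lastBlockSignature) {
                this.address = address;
                this.lastBlockSignature = lastBlockSignature;
            }
        }

        /** Remove peers whose reported tip is a block signature we have already marked invalid. */
        public static void removePeersWithInvalidTip(List<PeerSummary> peers, Set<ByteBuffer> invalidBlockSignatures) {
            peers.removeIf(peer -> peer.lastBlockSignature != null
                    && invalidBlockSignatures.contains(ByteBuffer.wrap(peer.lastBlockSignature)));
        }
    }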
CalDescent
23408827b3 Merge remote-tracking branch 'catbref/schnorr-agg-BlockMinter-fix' into schnorr-agg-BlockMinter-fix
# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
#	src/main/java/org/qortal/controller/OnlineAccountsManager.java
#	src/main/java/org/qortal/network/message/BlockV2Message.java
#	src/main/resources/blockchain.json
#	src/test/resources/test-chain-v2.json
2022-06-24 11:47:58 +01:00
CalDescent
ae6e2fab6f Rewrite of isNotOldPeer predicate, to fix logic issue (second attempt - first had too many issues)
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through the dataset, using up a lot of allKnownPeers lock time.

On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
2022-06-24 10:36:06 +01:00
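A minimal sketch of the intended predicate behaviour (thresholds and field names are illustrative, not the core's actual values): a peer stays in allKnownPeers only if it has genuinely connected recently or was only just learned about, rather than being kept alive indefinitely by failed connection attempts:

    import java.util.concurrent.TimeUnit;

    public class PeerAgeCheck {
        // Illustrative thresholds only
        private static final long RECENT_CONNECTION_PERIOD = TimeUnit.DAYS.toMillis(7);
        private static final long NEW_PEER_GRACE_PERIOD = TimeUnit.DAYS.toMillis(1);

        /**
         * Keying this decision on connection *attempts* was the bug described above:
         * an unreachable peer that is retried every day never ages out of the list.
         */
        public static boolean isNotOldPeer(long now, Long addedWhen, Long lastConnected) {
            if (lastConnected != null && now - lastConnected < RECENT_CONNECTION_PERIOD)
                return true;  // genuinely useful peer - keep
            if (addedWhen != null && now - addedWhen < NEW_PEER_GRACE_PERIOD)
                return true;  // newly learned peer - give it a chance before pruning
            return false;     // stale and not connecting - eligible for removal
        }
    }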
CalDescent
3af36644c0 Revert "Rewrite of isNotOldPeer predicate, to fix logic issue."
This reverts commit d81071f254.
2022-06-24 10:26:39 +01:00
CalDescent
db8f627f1a Default minPeerVersion set to 3.3.7 2022-06-24 10:13:55 +01:00
CalDescent
5db0fa080b Prune peers every 5 minutes instead of every cycle of the Controller thread.
This should reduce the amount of time the allKnownPeers lock is held.
2022-06-24 10:13:36 +01:00
CalDescent
d81071f254 Rewrite of isNotOldPeer predicate, to fix logic issue.
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through the dataset, using up a lot of allKnownPeers lock time.

On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
2022-06-24 10:11:46 +01:00
QuickMythril
ba148dfd88 Added Korean translations
credit: TL (Discord username)
2022-06-23 02:35:42 -04:00
CalDescent
dbcb457a04 Merge branch 'master' of github.com:Qortal/qortal 2022-06-20 22:51:33 +01:00
CalDescent
b00e1c8f47 Allow online account submission in all cases when in recovery mode. 2022-06-20 22:50:41 +01:00
CalDescent
899a6eb104 Rework of systray statuses
- Show "Minting" as long as online accounts are submitted to the network (previously it related to block signing).
- Fixed bug causing it to regularly show "Synchronizing 100%".
- Only show "Synchronizing" if the chain falls more than 2 hours behind - anything less is unnecessary noise.
2022-06-20 22:48:32 +01:00
CalDescent
6e556c82a3 Updated AdvancedInstaller project for v3.3.7 2022-06-20 22:25:40 +01:00
CalDescent
35ce64cc3a Bump version to 3.3.7 2022-06-20 21:52:41 +01:00
CalDescent
09b218d16c Merge branch 'master' of github.com:Qortal/qortal 2022-06-20 21:51:39 +01:00
CalDescent
7ea451e027 Allow trades to be initiated, and QDN data to be published, as long as the latest block is within 60 minutes of the current time. Again this should remove negative effects of larger re-orgs from the UX. 2022-06-20 21:26:48 +01:00
CalDescent
ffb27c3946 Further relaxed min latest block timestamp age to be considered "up to date" in a few places, from 30 to 60 mins. This should help reduce the visible effects of larger re-orgs if they happen again. 2022-06-20 21:25:14 +01:00
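The "up to date" test in these endpoints boils down to comparing the latest block's timestamp against a minimum computed from the current (NTP-adjusted) time, as the diffs below show. A self-contained sketch of the relaxed 60-minute threshold:

    public class UpToDateCheck {
        /** Latest block must be no more than 60 minutes old (previously 30 minutes in several places). */
        public static boolean isUpToDate(long nowMillis, long latestBlockTimestampMillis) {
            long minLatestBlockTimestamp = nowMillis - (60 * 60 * 1000L);
            return latestBlockTimestampMillis >= minLatestBlockTimestamp;
        }
    }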
QuickMythril
6e7d2b50a0 Added Romanian translations
credit: Ovidiu (Telegram username)
2022-06-20 01:27:28 -04:00
QuickMythril
bd025f30ff Updated ApiError German translations
credit: CD (Discord username)
2022-06-20 00:56:19 -04:00
CalDescent
c6cbd8e826 Revert "Sync behaviour changes:"
This reverts commit 8a76c6c0de.
2022-06-19 19:10:37 +01:00
CalDescent
b85afe3ca7 Revert "Keep existing findCommonBlocksWithPeers() and comparePeers() behaviour prior to consensus switchover, to reduce the number of variables."
This reverts commit fecfac5ad9.
2022-06-19 19:10:30 +01:00
CalDescent
5a4674c973 Revert "newConsensusTimestamp set to Sun Jun 19 2022 16:00:00 UTC"
This reverts commit 55a0c10855.
2022-06-19 19:09:57 +01:00
CalDescent
769418e5ae Revert "Fixed unit tests due to missing feature trigger"
This reverts commit 28f9df7178.
2022-06-19 19:09:52 +01:00
CalDescent
38faed5799 Don't submit online accounts if the node is more than 2 hours out of sync. 2022-06-19 18:56:08 +01:00
catbref
10a578428b Improve handling of intermittent auto-update failures, mostly under non-Windows environments.
Symptoms are:
* AutoUpdate trying to run new ApplyUpdate process, but nothing appears in log-apply-update.?.txt
* Main qortal.jar process continues to run without updating
* Last AutoUpdate line in log.txt.? is:
	2022-06-18 15:42:46 INFO  AutoUpdate:258 - Applying update with: /usr/local/openjdk11/bin/java -Djava.net.preferIPv4Stack=false -Xss256k -Xmx1024m -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:5005 -cp new-qortal.jar org.qortal.ApplyUpdate

Changes are:
* child process now inherits parent's stdout / stderr (was piped from parent)
* child process is given a fresh stdin, which is immediately closed
* AutoUpdate now converts -agentlib JVM arg to -DQORTAL_agentlib
* ApplyUpdate converts -DQORTAL_agentlib to -agentlib

The latter two changes are to prevent a conflict where two processes try to reuse the same JVM debugging port number.
2022-06-19 18:25:00 +01:00
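A self-contained sketch of the relaunch sequence described above, combining the two sides of the fix: the parent disables (but retains) any -agentlib argument before launching, and the child inherits stdout/stderr and gets its stdin closed immediately. The jar name and the holder-property prefix mirror the diff further down; treat the rest as illustrative:

    import java.lang.management.ManagementFactory;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    public class RestartSketch {
        // Holder prefix so the child JVM doesn't try to re-bind the parent's debug port
        public static final String AGENTLIB_JVM_HOLDER_ARG = "-DQORTAL_agentlib=";

        public static void main(String[] args) throws Exception {
            List<String> javaCmd = new ArrayList<>();
            javaCmd.add(System.getProperty("java.home") + "/bin/java");

            // Parent side (AutoUpdate): disable, but retain, any -agentlib JVM arg
            javaCmd.addAll(ManagementFactory.getRuntimeMXBean().getInputArguments().stream()
                    .map(arg -> arg.replace("-agentlib", AGENTLIB_JVM_HOLDER_ARG))
                    .collect(Collectors.toList()));

            javaCmd.add("-jar");
            javaCmd.add("new-qortal.jar");

            ProcessBuilder processBuilder = new ProcessBuilder(javaCmd);
            // Child inherits our stdout/stderr instead of writing into pipes nobody drains
            processBuilder.redirectOutput(ProcessBuilder.Redirect.INHERIT);
            processBuilder.redirectError(ProcessBuilder.Redirect.INHERIT);
            Process process = processBuilder.start();
            // Nothing to feed to the child, so close its stdin straight away
            process.getOutputStream().close();
        }
    }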
CalDescent
96cdf4a87e Updated AdvancedInstaller project for v3.3.6 2022-06-19 11:41:50 +01:00
catbref
431cbf01af BlockMinter will discard block candidates that turn out to be invalid just prior to adding transactions, to be potentially reminted in the next pass 2022-06-16 17:47:08 +01:00
catbref
4eb58d3591 BlockTimestampTests to show results from changing blockTimingsByHeight 2022-06-04 12:36:36 +01:00
catbref
8d8e58a905 Network$NetworkProcessor now has its own LOGGER 2022-06-04 12:36:36 +01:00
catbref
8f58da4f52 OnlineAccountsManager:
Bump v3 min peer version from 3.2.203 to 3.3.203
No need for toOnlineAccountTimestamp(long) as we only ever use getCurrentOnlineAccountTimestamp().
The latter now returns Long and calls NTP.getTime() on behalf of the caller, removing duplicated NTP.getTime() calls and null checks from multiple callers.
Add aggregate-signature feature-trigger timestamp threshold checks where needed, near sign() and verify() calls.
Improve logging - but some logging will need to be removed / reduced before merging.
2022-06-04 12:36:36 +01:00
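A hedged sketch of the getCurrentOnlineAccountTimestamp() shape described above: it returns a boxed Long so a single null check covers the "NTP time not yet available" case. The quantisation to a fixed window is an assumption here; the real constant lives in OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS:

    public class OnlineTimestampSketch {
        // Assumed 5-minute window for illustration only; see OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS
        private static final long ONLINE_TIMESTAMP_MODULUS = 5 * 60 * 1000L;

        /** Returns the current online-accounts timestamp, or null when NTP time isn't available yet. */
        public static Long getCurrentOnlineAccountTimestamp(Long ntpTimeMillis) {
            if (ntpTimeMillis == null)
                return null;
            return (ntpTimeMillis / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
        }
    }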
catbref
a4e2aedde1 Remove debug-hindering "final" modifier from effectively final locals 2022-06-04 12:36:36 +01:00
catbref
24d04fe928 Block.mint() always uses latest timestamped online accounts 2022-06-04 12:36:35 +01:00
catbref
0cf32f6c5e BlockMinter now only acquires repository instance as needed to prevent long HSQLDB rollbacks 2022-06-04 12:35:54 +01:00
catbref
84d850ee0b WIP: use blockchain feature-trigger "aggregateSignatureTimestamp" to determine when online-accounts sigs and block sigs switch to aggregate sigs 2022-06-04 12:35:51 +01:00
catbref
51930d3ccf Move some private key methods to Crypto class 2022-06-04 12:35:15 +01:00
catbref
c5e5316f2e Schnorr public key and signature aggregation for 'online accounts'.
Aggregated signature should reduce block payload significantly,
as well as associated network, memory & CPU loads.

org.qortal.crypto.BouncyCastle25519 renamed to Qortal25519Extras.
Our class provides additional features such as DH-based shared secret,
aggregating public keys & signatures and sign/verify for aggregate use.

BouncyCastle's Ed25519 class copied in as BouncyCastleEd25519,
but with 'private' modifiers changed to 'protected',
to allow extension by our Qortal25519Extras class,
and to avoid lots of messy reflection-based calls.
2022-06-04 12:35:15 +01:00
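How block validation uses this, in outline: aggregate all participating reward-share public keys, then verify the single aggregated signature against the online-accounts timestamp in one step. The sketch below assumes the Qortal25519Extras helper methods exactly as they appear in the Block.java diff further down (aggregatePublicKeys, verifyAggregated); it is not the Block.isValid() implementation itself:

    import java.util.Collection;

    import com.google.common.primitives.Longs; // Guava, already used by the core for Longs.toByteArray()

    import org.qortal.crypto.Qortal25519Extras;

    public class AggregateVerifySketch {
        /** One aggregated signature is checked against the aggregate of all reward-share public keys. */
        public static boolean verifyOnlineAccounts(Collection<byte[]> rewardSharePublicKeys,
                                                   byte[] aggregateSignature,
                                                   long onlineTimestamp) {
            byte[] aggregatePublicKey = Qortal25519Extras.aggregatePublicKeys(rewardSharePublicKeys);
            byte[] onlineTimestampBytes = Longs.toByteArray(onlineTimestamp);
            return Qortal25519Extras.verifyAggregated(aggregatePublicKey, aggregateSignature, onlineTimestampBytes);
        }
    }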
catbref
829ab1eb37 Cherry-pick minor fixes from another branch to resolve "No online accounts - not even our own?" issues 2022-06-04 12:35:03 +01:00
catbref
d9b330b46a OnlineAccountData no longer uses signature in equals() or hashCode() because newer aggregate signatures use random nonces and OAD class doesn't care about / verify sigs 2022-06-04 10:49:59 +01:00
catbref
c032b92d0d Logging fix: size() was called on the wrong collection, leading to confusing logging output 2022-06-04 10:49:59 +01:00
catbref
ae92a6eed4 OnlineAccountsV3: slightly rework Block.mint() so it doesn't need to filter so many online accounts
Slight optimization to BlockMinter by adding OnlineAccountsManager.hasOnlineAccounts():boolean instead of returning actual data, only to call isEmpty()!
2022-06-04 10:49:59 +01:00
catbref
712c4463f7 OnlineAccountsV3:
Move online account cache code from Block into OnlineAccountsManager, simplifying Block code and removing duplicated caches from Block also.
This tidies up those remaining set-based getters in OnlineAccountsManager.
No need for currentOnlineAccountsHashes's inner Map to be sorted, so addAccounts() creates a new ConcurrentHashMap instead of a ConcurrentSkipListMap.

Changed GetOnlineAccountsV3Message to use a single byte for count of hashes as it can only be 1 to 256.
256 is represented by 0.

Comments tidy-up.
Change v3 broadcast interval from 10s to 15s.
2022-06-04 10:49:59 +01:00
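The single-byte hash count mentioned above encodes the range 1..256 by mapping 256 to 0, since a count of zero never needs to be sent. A small self-contained codec showing both directions:

    public class HashCountCodec {
        /** Encode a hash count in one byte: valid counts are 1..256, with 256 sent as 0. */
        public static byte encodeCount(int count) {
            if (count < 1 || count > 256)
                throw new IllegalArgumentException("count must be in 1..256");
            return (byte) (count == 256 ? 0 : count);
        }

        /** Decode the single byte back to a count in 1..256. */
        public static int decodeCount(byte encoded) {
            int value = encoded & 0xFF;
            return value == 0 ? 256 : value;
        }
    }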
catbref
fbdc1e1cdb OnlineAccountsV3:
Adding support for GET_ONLINE_ACCOUNTS_V3 to Controller, which calls OnlineAccountsManager.

With OnlineAccountsV3, instead of nodes sending their list of known online accounts (public keys),
nodes now send a summary which contains hashes of known online accounts, one per timestamp + leading-byte combo.
Thus outgoing messages are much smaller and scale better with more users.
Remote peers compare the hashes and send back lists of online accounts (for that timestamp + leading-byte combo) where hashes do not match.

Massive rewrite of OnlineAccountsManager to maintain online accounts.
Now there are three caches:
1. all online accounts, but split into sets by timestamp
2. 'hashes' of all online accounts, one hash per timestamp+leading-byte combination
Mainly for efficient use by GetOnlineAccountsV3 message constructor.
3. online accounts for the highest blocks on our chain to speed up block processing
Note that highest blocks might be way older than 'current' blocks if we're somewhat behind in syncing.

Other OnlineAccountsManager changes:
* Use scheduling executor service to manage subtasks
* Switch from 'synchronized' to 'concurrent' collections
* Generally switch from Lists to Sets - requires improved OnlineAccountData.hashCode() - further work needed
* Only send V3 messages to peers with version >= 3.2.203 (for testing)
* More info on which online accounts lists are returned depending on use-cases

To test, change your peer's version (in pom.xml?) to v3.2.203.
2022-06-04 10:49:59 +01:00
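A sketch of the summary idea described above: group the online accounts' public keys for one timestamp by leading byte and derive one hash per group, so peers exchange a handful of hashes instead of full key lists and only fetch the groups that differ. The exact hash construction is defined in OnlineAccountsManager; a SHA-256 over the sorted keys stands in for it here:

    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class OnlineAccountSummary {
        /** One hash per leading public-key byte for the given online-accounts timestamp (illustrative construction). */
        public static Map<Byte, byte[]> summarize(long timestamp, Collection<byte[]> onlinePublicKeys) throws Exception {
            Map<Byte, List<byte[]>> byLeadingByte = new HashMap<>();
            for (byte[] publicKey : onlinePublicKeys)
                byLeadingByte.computeIfAbsent(publicKey[0], k -> new ArrayList<>()).add(publicKey);

            Map<Byte, byte[]> hashes = new HashMap<>();
            for (Map.Entry<Byte, List<byte[]>> entry : byLeadingByte.entrySet()) {
                List<byte[]> keys = entry.getValue();
                keys.sort(Arrays::compare); // deterministic order so both peers derive the same hash
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                digest.update(java.nio.ByteBuffer.allocate(Long.BYTES).putLong(timestamp).array());
                for (byte[] key : keys)
                    digest.update(key);
                hashes.put(entry.getKey(), digest.digest());
            }
            return hashes;
        }
    }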
catbref
f2060fe7a1 Initial work on online-accounts-v3 network messages to drastically reduce network load.
Lots of TODOs to action.
2022-06-04 10:49:59 +01:00
catbref
6950c6bf69 Initial work on reducing network load for transferring blocks.
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.

Set min peer version to 3.3.203 in BlockV2Message class.
2022-06-04 10:49:11 +01:00
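The V2 reduction replaces the per-AT (address, state hash, fees) list with just an AT count, the summed AT fees and one hash over all the individual AT state hashes. A hedged sketch of combining the per-AT state hashes into that single value (the real combination is done in Block/BlockTransformer; a SHA-256 over the concatenated hashes stands in here):

    import java.security.MessageDigest;
    import java.util.List;

    public class AtStatesSummarySketch {
        /** Collapse the individual AT state hashes into a single hash for the V2 block form (illustrative). */
        public static byte[] combinedAtStatesHash(List<byte[]> individualStateHashes) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            for (byte[] stateHash : individualStateHashes)
                digest.update(stateHash);
            return digest.digest();
        }
    }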
68 changed files with 4655 additions and 1034 deletions

View File

@@ -17,10 +17,10 @@
<ROW Property="Manufacturer" Value="Qortal"/>
<ROW Property="MsiLogging" MultiBuildValue="DefaultBuild:vp"/>
<ROW Property="NTP_GOOD" Value="false"/>
<ROW Property="ProductCode" Value="1033:{7DED0630-60A3-438A-B857-D95BD16213F1} 1049:{4A7BAAA1-E9EC-4E92-8963-34420EC1E3F4} 2052:{F3448469-4E4F-4600-AE05-DF7669B267E3} 2057:{0B980BC5-4C80-4A98-A4D2-B32D134AA276} " Type="16"/>
<ROW Property="ProductCode" Value="1033:{B786B6C1-86FA-4917-BAF9-7C9D10959D66} 1049:{60881A63-53FC-4DBE-AF3B-0568F55D2150} 2052:{108D1268-8111-49B9-B768-CC0A0A0CEDE1} 2057:{46DB692E-D942-40D5-B32E-FB94458478BF} " Type="16"/>
<ROW Property="ProductLanguage" Value="2057"/>
<ROW Property="ProductName" Value="Qortal"/>
<ROW Property="ProductVersion" Value="3.3.5" Type="32"/>
<ROW Property="ProductVersion" Value="3.4.0" Type="32"/>
<ROW Property="RECONFIG_NTP" Value="true"/>
<ROW Property="REMOVE_BLOCKCHAIN" Value="YES" Type="4"/>
<ROW Property="REPAIR_BLOCKCHAIN" Value="YES" Type="4"/>
@@ -212,7 +212,7 @@
<ROW Component="ADDITIONAL_LICENSE_INFO_71" ComponentId="{12A3ADBE-BB7A-496C-8869-410681E6232F}" Directory_="jdk.zipfs_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_71" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_8" ComponentId="{D53AD95E-CF96-4999-80FC-5812277A7456}" Directory_="java.naming_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_8" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_9" ComponentId="{6B7EA9B0-5D17-47A8-B78C-FACE86D15E01}" Directory_="java.net.http_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_9" Type="0"/>
<ROW Component="AI_CustomARPName" ComponentId="{191AD445-72DF-4850-BB4A-FE92D4B62BCF}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_CustomARPName" ComponentId="{D57E945C-0FFB-447C-ADF7-2253CEBF4C0C}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_ExePath" ComponentId="{3644948D-AE0B-41BB-9FAF-A79E70490A08}" Directory_="APPDIR" Attributes="260" KeyPath="AI_ExePath"/>
<ROW Component="APPDIR" ComponentId="{680DFDDE-3FB4-47A5-8FF5-934F576C6F91}" Directory_="APPDIR" Attributes="0"/>
<ROW Component="AccessBridgeCallbacks.h" ComponentId="{288055D1-1062-47A3-AA44-5601B4E38AED}" Directory_="bridge_Dir" Attributes="0" KeyPath="AccessBridgeCallbacks.h" Type="0"/>

View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>3.3.6</version>
<version>3.4.1</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>

View File

@@ -8,6 +8,7 @@ import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.security.Security;
import java.util.*;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -18,6 +19,8 @@ import org.qortal.api.ApiRequest;
import org.qortal.controller.AutoUpdate;
import org.qortal.settings.Settings;
import static org.qortal.controller.AutoUpdate.AGENTLIB_JVM_HOLDER_ARG;
public class ApplyUpdate {
static {
@@ -197,6 +200,11 @@ public class ApplyUpdate {
// JVM arguments
javaCmd.addAll(ManagementFactory.getRuntimeMXBean().getInputArguments());
// Reapply any retained, but disabled, -agentlib JVM arg
javaCmd = javaCmd.stream()
.map(arg -> arg.replace(AGENTLIB_JVM_HOLDER_ARG, "-agentlib"))
.collect(Collectors.toList());
// Call mainClass in JAR
javaCmd.addAll(Arrays.asList("-jar", JAR_FILENAME));
@@ -205,7 +213,7 @@ public class ApplyUpdate {
}
try {
LOGGER.info(() -> String.format("Restarting node with: %s", String.join(" ", javaCmd)));
LOGGER.info(String.format("Restarting node with: %s", String.join(" ", javaCmd)));
ProcessBuilder processBuilder = new ProcessBuilder(javaCmd);
@@ -214,8 +222,15 @@ public class ApplyUpdate {
processBuilder.environment().put(JAVA_TOOL_OPTIONS_NAME, JAVA_TOOL_OPTIONS_VALUE);
}
processBuilder.start();
} catch (IOException e) {
// New process will inherit our stdout and stderr
processBuilder.redirectOutput(ProcessBuilder.Redirect.INHERIT);
processBuilder.redirectError(ProcessBuilder.Redirect.INHERIT);
Process process = processBuilder.start();
// Nothing to pipe to new process, so close output stream (process's stdin)
process.getOutputStream().close();
} catch (Exception e) {
LOGGER.error(String.format("Failed to restart node (BAD): %s", e.getMessage()));
}
}

View File

@@ -11,15 +11,15 @@ public class PrivateKeyAccount extends PublicKeyAccount {
private final Ed25519PrivateKeyParameters edPrivateKeyParams;
/**
* Create PrivateKeyAccount using byte[32] seed.
* Create PrivateKeyAccount using byte[32] private key.
*
* @param seed
* @param privateKey
* byte[32] used to create private/public key pair
* @throws IllegalArgumentException
* if passed invalid seed
* if passed invalid privateKey
*/
public PrivateKeyAccount(Repository repository, byte[] seed) {
this(repository, new Ed25519PrivateKeyParameters(seed, 0));
public PrivateKeyAccount(Repository repository, byte[] privateKey) {
this(repository, new Ed25519PrivateKeyParameters(privateKey, 0));
}
private PrivateKeyAccount(Repository repository, Ed25519PrivateKeyParameters edPrivateKeyParams) {
@@ -37,10 +37,6 @@ public class PrivateKeyAccount extends PublicKeyAccount {
return this.privateKey;
}
public static byte[] toPublicKey(byte[] seed) {
return new Ed25519PrivateKeyParameters(seed, 0).generatePublicKey().getEncoded();
}
public byte[] sign(byte[] message) {
return Crypto.sign(this.edPrivateKeyParams, message);
}

View File

@@ -4,6 +4,7 @@ import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.controller.Synchronizer;
import org.qortal.network.Network;
@@ -21,7 +22,7 @@ public class NodeStatus {
public final int height;
public NodeStatus() {
this.isMintingPossible = Controller.getInstance().isMintingPossible();
this.isMintingPossible = OnlineAccountsManager.getInstance().hasActiveOnlineAccountSignatures();
this.syncPercent = Synchronizer.getInstance().getSyncPercent();
this.isSynchronizing = Synchronizer.getInstance().isSynchronizing();

View File

@@ -125,12 +125,12 @@ public class AdminResource {
}
private String getNodeType() {
if (Settings.getInstance().isTopOnly()) {
return "topOnly";
}
else if (Settings.getInstance().isLite()) {
if (Settings.getInstance().isLite()) {
return "lite";
}
else if (Settings.getInstance().isTopOnly()) {
return "topOnly";
}
else {
return "full";
}

View File

@@ -57,6 +57,7 @@ import org.qortal.transform.TransformationException;
import org.qortal.transform.transaction.ArbitraryTransactionTransformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.ZipUtils;
@Path("/arbitrary")
@@ -1099,7 +1100,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, error);
}
if (!Controller.getInstance().isUpToDate()) {
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);
}

View File

@@ -114,7 +114,7 @@ public class BlocksResource {
@Path("/signature/{signature}/data")
@Operation(
summary = "Fetch serialized, base58 encoded block data using base58 signature",
description = "Returns serialized data for the block that matches the given signature",
description = "Returns serialized data for the block that matches the given signature, and an optional block serialization version",
responses = {
@ApiResponse(
description = "the block data",
@@ -125,7 +125,7 @@ public class BlocksResource {
@ApiErrors({
ApiError.INVALID_SIGNATURE, ApiError.BLOCK_UNKNOWN, ApiError.INVALID_DATA, ApiError.REPOSITORY_ISSUE
})
public String getSerializedBlockData(@PathParam("signature") String signature58) {
public String getSerializedBlockData(@PathParam("signature") String signature58, @QueryParam("version") Integer version) {
// Decode signature
byte[] signature;
try {
@@ -136,20 +136,41 @@ public class BlocksResource {
try (final Repository repository = RepositoryManager.getRepository()) {
// Default to version 1
if (version == null) {
version = 1;
}
// Check the database first
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
switch (version) {
case 1:
bytes.write(BlockTransformer.toBytes(block));
break;
case 2:
bytes.write(BlockTransformer.toBytesV2(block));
break;
default:
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
return Base58.encode(bytes.toByteArray());
}
// Not found, so try the block archive
byte[] bytes = BlockArchiveReader.getInstance().fetchSerializedBlockBytesForSignature(signature, false, repository);
if (bytes != null) {
return Base58.encode(bytes);
if (version != 1) {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Archived blocks require version 1");
}
return Base58.encode(bytes);
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);

View File

@@ -42,6 +42,7 @@ import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
@Path("/crosschain/tradebot")
@Tag(name = "Cross-Chain (Trade-Bot)")
@@ -137,7 +138,8 @@ public class CrossChainTradeBotResource {
if (tradeBotCreateRequest.qortAmount <= 0 || tradeBotCreateRequest.fundingQortAmount <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ORDER_SIZE_TOO_SMALL);
if (!Controller.getInstance().isUpToDate())
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -198,7 +200,8 @@ public class CrossChainTradeBotResource {
if (tradeBotRespondRequest.receivingAddress == null || !Crypto.isValidAddress(tradeBotRespondRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (!Controller.getInstance().isUpToDate())
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);
// Extract data from cross-chain trading AT

View File

@@ -723,9 +723,9 @@ public class TransactionsResource {
ApiError.BLOCKCHAIN_NEEDS_SYNC, ApiError.INVALID_SIGNATURE, ApiError.INVALID_DATA, ApiError.TRANSFORMATION_ERROR, ApiError.REPOSITORY_ISSUE
})
public String processTransaction(String rawBytes58) {
// Only allow a transaction to be processed if our latest block is less than 30 minutes old
// Only allow a transaction to be processed if our latest block is less than 60 minutes old
// If older than this, we should first wait until the blockchain is synced
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);

View File

@@ -10,6 +10,7 @@ import java.math.BigInteger;
import java.math.RoundingMode;
import java.nio.charset.StandardCharsets;
import java.text.DecimalFormat;
import java.text.MessageFormat;
import java.text.NumberFormat;
import java.util.*;
import java.util.stream.Collectors;
@@ -27,6 +28,7 @@ import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.block.BlockChain.AccountLevelShareBin;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.crypto.Crypto;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AccountData;
import org.qortal.data.account.EligibleQoraHolderData;
@@ -221,11 +223,10 @@ public class Block {
return accountAmount;
}
}
/** Always use getExpandedAccounts() to access this, as it's lazy-instantiated. */
private List<ExpandedAccount> cachedExpandedAccounts = null;
/** Opportunistic cache of this block's valid online accounts. Only created by call to isValid(). */
private List<OnlineAccountData> cachedValidOnlineAccounts = null;
/** Opportunistic cache of this block's valid online reward-shares. Only created by call to isValid(). */
private List<RewardShareData> cachedOnlineRewardShares = null;
@@ -347,18 +348,21 @@ public class Block {
int version = parentBlock.getNextBlockVersion();
byte[] reference = parentBlockData.getSignature();
// Fetch our list of online accounts
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, minter.getPublicKey());
if (minterLevel == 0) {
LOGGER.error("Minter effective level returned zero?");
return null;
}
// Find newest online accounts timestamp
long onlineAccountsTimestamp = 0;
for (OnlineAccountData onlineAccountData : onlineAccounts) {
if (onlineAccountData.getTimestamp() > onlineAccountsTimestamp)
onlineAccountsTimestamp = onlineAccountData.getTimestamp();
long timestamp = calcTimestamp(parentBlockData, minter.getPublicKey(), minterLevel);
long onlineAccountsTimestamp = OnlineAccountsManager.getCurrentOnlineAccountTimestamp();
// Fetch our list of online accounts
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts(onlineAccountsTimestamp);
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
return null;
}
// Load sorted list of reward share public keys into memory, so that the indexes can be obtained.
@@ -369,10 +373,6 @@ public class Block {
// Map using index into sorted list of reward-shares as key
Map<Integer, OnlineAccountData> indexedOnlineAccounts = new HashMap<>();
for (OnlineAccountData onlineAccountData : onlineAccounts) {
// Disregard online accounts with different timestamps
if (onlineAccountData.getTimestamp() != onlineAccountsTimestamp)
continue;
Integer accountIndex = getRewardShareIndex(onlineAccountData.getPublicKey(), allRewardSharePublicKeys);
if (accountIndex == null)
// Online account (reward-share) with current timestamp but reward-share cancelled
@@ -389,26 +389,29 @@ public class Block {
byte[] encodedOnlineAccounts = BlockTransformer.encodeOnlineAccounts(onlineAccountsSet);
int onlineAccountsCount = onlineAccountsSet.size();
// Concatenate online account timestamp signatures (in correct order)
byte[] onlineAccountsSignatures = new byte[onlineAccountsCount * Transformer.SIGNATURE_LENGTH];
for (int i = 0; i < onlineAccountsCount; ++i) {
Integer accountIndex = accountIndexes.get(i);
OnlineAccountData onlineAccountData = indexedOnlineAccounts.get(accountIndex);
System.arraycopy(onlineAccountData.getSignature(), 0, onlineAccountsSignatures, i * Transformer.SIGNATURE_LENGTH, Transformer.SIGNATURE_LENGTH);
byte[] onlineAccountsSignatures;
if (timestamp >= BlockChain.getInstance().getAggregateSignatureTimestamp()) {
// Collate all signatures
Collection<byte[]> signaturesToAggregate = indexedOnlineAccounts.values()
.stream()
.map(OnlineAccountData::getSignature)
.collect(Collectors.toList());
// Aggregated, single signature
onlineAccountsSignatures = Qortal25519Extras.aggregateSignatures(signaturesToAggregate);
} else {
// Concatenate online account timestamp signatures (in correct order)
onlineAccountsSignatures = new byte[onlineAccountsCount * Transformer.SIGNATURE_LENGTH];
for (int i = 0; i < onlineAccountsCount; ++i) {
Integer accountIndex = accountIndexes.get(i);
OnlineAccountData onlineAccountData = indexedOnlineAccounts.get(accountIndex);
System.arraycopy(onlineAccountData.getSignature(), 0, onlineAccountsSignatures, i * Transformer.SIGNATURE_LENGTH, Transformer.SIGNATURE_LENGTH);
}
}
byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData,
minter.getPublicKey(), encodedOnlineAccounts));
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, minter.getPublicKey());
if (minterLevel == 0) {
LOGGER.error("Minter effective level returned zero?");
return null;
}
long timestamp = calcTimestamp(parentBlockData, minter.getPublicKey(), minterLevel);
int transactionCount = 0;
byte[] transactionsSignature = null;
int height = parentBlockData.getHeight() + 1;
@@ -1013,49 +1016,59 @@ public class Block {
if (this.blockData.getOnlineAccountsSignatures() == null || this.blockData.getOnlineAccountsSignatures().length == 0)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MISSING;
if (this.blockData.getOnlineAccountsSignatures().length != onlineRewardShares.size() * Transformer.SIGNATURE_LENGTH)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
if (this.blockData.getTimestamp() >= BlockChain.getInstance().getAggregateSignatureTimestamp()) {
// We expect just the one, aggregated signature
if (this.blockData.getOnlineAccountsSignatures().length != Transformer.SIGNATURE_LENGTH)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
} else {
if (this.blockData.getOnlineAccountsSignatures().length != onlineRewardShares.size() * Transformer.SIGNATURE_LENGTH)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
}
// Check signatures
long onlineTimestamp = this.blockData.getOnlineAccountsTimestamp();
byte[] onlineTimestampBytes = Longs.toByteArray(onlineTimestamp);
// If this block is much older than current online timestamp, then there's no point checking current online accounts
List<OnlineAccountData> currentOnlineAccounts = onlineTimestamp < NTP.getTime() - OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS
? null
: OnlineAccountsManager.getInstance().getOnlineAccounts();
List<OnlineAccountData> latestBlocksOnlineAccounts = OnlineAccountsManager.getInstance().getLatestBlocksOnlineAccounts();
// Extract online accounts' timestamp signatures from block data
// Extract online accounts' timestamp signatures from block data. Only one signature if aggregated.
List<byte[]> onlineAccountsSignatures = BlockTransformer.decodeTimestampSignatures(this.blockData.getOnlineAccountsSignatures());
// We'll build up a list of online accounts to hand over to Controller if block is added to chain
// and this will become latestBlocksOnlineAccounts (above) to reduce CPU load when we process next block...
List<OnlineAccountData> ourOnlineAccounts = new ArrayList<>();
if (this.blockData.getTimestamp() >= BlockChain.getInstance().getAggregateSignatureTimestamp()) {
// Aggregate all public keys
Collection<byte[]> publicKeys = onlineRewardShares.stream()
.map(RewardShareData::getRewardSharePublicKey)
.collect(Collectors.toList());
for (int i = 0; i < onlineAccountsSignatures.size(); ++i) {
byte[] signature = onlineAccountsSignatures.get(i);
byte[] publicKey = onlineRewardShares.get(i).getRewardSharePublicKey();
byte[] aggregatePublicKey = Qortal25519Extras.aggregatePublicKeys(publicKeys);
OnlineAccountData onlineAccountData = new OnlineAccountData(onlineTimestamp, signature, publicKey);
ourOnlineAccounts.add(onlineAccountData);
byte[] aggregateSignature = onlineAccountsSignatures.get(0);
// If signature is still current then no need to perform Ed25519 verify
if (currentOnlineAccounts != null && currentOnlineAccounts.remove(onlineAccountData))
// remove() returned true, so online account still current
// and one less entry in currentOnlineAccounts to check next time
continue;
// If signature was okay in latest block then no need to perform Ed25519 verify
if (latestBlocksOnlineAccounts != null && latestBlocksOnlineAccounts.contains(onlineAccountData))
continue;
if (!Crypto.verify(publicKey, signature, onlineTimestampBytes))
// One-step verification of aggregate signature using aggregate public key
if (!Qortal25519Extras.verifyAggregated(aggregatePublicKey, aggregateSignature, onlineTimestampBytes))
return ValidationResult.ONLINE_ACCOUNT_SIGNATURE_INCORRECT;
} else {
// Build block's view of online accounts
Set<OnlineAccountData> onlineAccounts = new HashSet<>();
for (int i = 0; i < onlineAccountsSignatures.size(); ++i) {
byte[] signature = onlineAccountsSignatures.get(i);
byte[] publicKey = onlineRewardShares.get(i).getRewardSharePublicKey();
OnlineAccountData onlineAccountData = new OnlineAccountData(onlineTimestamp, signature, publicKey);
onlineAccounts.add(onlineAccountData);
}
// Remove those already validated & cached by online accounts manager - no need to re-validate them
OnlineAccountsManager.getInstance().removeKnown(onlineAccounts, onlineTimestamp);
// Validate the rest
for (OnlineAccountData onlineAccount : onlineAccounts)
if (!Crypto.verify(onlineAccount.getPublicKey(), onlineAccount.getSignature(), onlineTimestampBytes))
return ValidationResult.ONLINE_ACCOUNT_SIGNATURE_INCORRECT;
// We've validated these, so allow online accounts manager to cache
OnlineAccountsManager.getInstance().addBlocksOnlineAccounts(onlineAccounts, onlineTimestamp);
}
// All online accounts valid, so save our list of online accounts for potential later use
this.cachedValidOnlineAccounts = ourOnlineAccounts;
this.cachedOnlineRewardShares = onlineRewardShares;
return ValidationResult.OK;
@@ -1426,9 +1439,6 @@ public class Block {
postBlockTidy();
// Give Controller our cached, valid online accounts data (if any) to help reduce CPU load for next block
OnlineAccountsManager.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
}
@@ -1644,9 +1654,6 @@ public class Block {
this.blockData.setHeight(null);
postBlockTidy();
// Remove any cached, valid online accounts data from Controller
OnlineAccountsManager.getInstance().popLatestBlocksOnlineAccounts();
}
protected void orphanTransactionsFromBlock() throws DataException {
@@ -1907,7 +1914,7 @@ public class Block {
// Fetch list of legacy QORA holders who haven't reached their cap of QORT reward.
List<EligibleQoraHolderData> qoraHolders = this.repository.getAccountRepository().getEligibleLegacyQoraHolders(isProcessingNotOrphaning ? null : this.blockData.getHeight());
final boolean haveQoraHolders = !qoraHolders.isEmpty();
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShare();
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShareAtHeight(this.blockData.getHeight());
// Perform account-level-based reward scaling if appropriate
if (!haveFounders) {

View File

@@ -68,11 +68,12 @@ public class BlockChain {
atFindNextTransactionFix,
newBlockSigHeight,
shareBinFix,
rewardShareLimitTimestamp,
calcChainWeightTimestamp,
newConsensusTimestamp,
transactionV5Timestamp,
transactionV6Timestamp,
disableReferenceTimestamp
disableReferenceTimestamp,
aggregateSignatureTimestamp;
}
// Custom transaction fees
@@ -112,9 +113,13 @@ public class BlockChain {
/** Generated lookup of share-bin by account level */
private AccountLevelShareBin[] shareBinsByLevel;
/** Share of block reward/fees to legacy QORA coin holders */
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private Long qoraHoldersShare;
/** Share of block reward/fees to legacy QORA coin holders, by block height */
public static class ShareByHeight {
public int height;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public long share;
}
private List<ShareByHeight> qoraHoldersShareByHeight;
/** How many legacy QORA per 1 QORT of block reward. */
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
@@ -157,7 +162,7 @@ public class BlockChain {
private int minAccountLevelToMint;
private int minAccountLevelForBlockSubmissions;
private int minAccountLevelToRewardShare;
private int maxRewardSharesPerMintingAccount;
private int maxRewardSharesPerFounderMintingAccount;
private int founderEffectiveMintingLevel;
/** Minimum time to retain online account signatures (ms) for block validity checks. */
@@ -165,6 +170,13 @@ public class BlockChain {
/** Maximum time to retain online account signatures (ms) for block validity checks, to allow for clock variance. */
private long onlineAccountSignaturesMaxLifetime;
/** Max reward shares by block height */
public static class MaxRewardSharesByTimestamp {
public long timestamp;
public int maxShares;
}
private List<MaxRewardSharesByTimestamp> maxRewardSharesByTimestamp;
/** Settings relating to CIYAM AT feature. */
public static class CiyamAtSettings {
/** Fee per step/op-code executed. */
@@ -346,10 +358,6 @@ public class BlockChain {
return this.cumulativeBlocksByLevel;
}
public long getQoraHoldersShare() {
return this.qoraHoldersShare;
}
public long getQoraPerQortReward() {
return this.qoraPerQortReward;
}
@@ -366,8 +374,8 @@ public class BlockChain {
return this.minAccountLevelToRewardShare;
}
public int getMaxRewardSharesPerMintingAccount() {
return this.maxRewardSharesPerMintingAccount;
public int getMaxRewardSharesPerFounderMintingAccount() {
return this.maxRewardSharesPerFounderMintingAccount;
}
public int getFounderEffectiveMintingLevel() {
@@ -400,12 +408,12 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.shareBinFix.name()).intValue();
}
public long getCalcChainWeightTimestamp() {
return this.featureTriggers.get(FeatureTrigger.calcChainWeightTimestamp.name()).longValue();
public long getRewardShareLimitTimestamp() {
return this.featureTriggers.get(FeatureTrigger.rewardShareLimitTimestamp.name()).longValue();
}
public long getNewConsensusTimestamp() {
return this.featureTriggers.get(FeatureTrigger.newConsensusTimestamp.name()).longValue();
public long getCalcChainWeightTimestamp() {
return this.featureTriggers.get(FeatureTrigger.calcChainWeightTimestamp.name()).longValue();
}
public long getTransactionV5Timestamp() {
@@ -420,6 +428,10 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.disableReferenceTimestamp.name()).longValue();
}
public long getAggregateSignatureTimestamp() {
return this.featureTriggers.get(FeatureTrigger.aggregateSignatureTimestamp.name()).longValue();
}
// More complex getters for aspects that change by height or timestamp
public long getRewardAtHeight(int ourHeight) {
@@ -448,6 +460,23 @@ public class BlockChain {
return this.getUnitFee();
}
public int getMaxRewardSharesAtTimestamp(long ourTimestamp) {
for (int i = maxRewardSharesByTimestamp.size() - 1; i >= 0; --i)
if (maxRewardSharesByTimestamp.get(i).timestamp <= ourTimestamp)
return maxRewardSharesByTimestamp.get(i).maxShares;
return 0;
}
public long getQoraHoldersShareAtHeight(int ourHeight) {
// Scan through for QORA share at our height
for (int i = qoraHoldersShareByHeight.size() - 1; i >= 0; --i)
if (qoraHoldersShareByHeight.get(i).height <= ourHeight)
return qoraHoldersShareByHeight.get(i).share;
return 0;
}
/** Validate blockchain config read from JSON */
private void validateConfig() {
if (this.genesisInfo == null)
@@ -459,8 +488,8 @@ public class BlockChain {
if (this.sharesByLevel == null)
Settings.throwValidationError("No \"sharesByLevel\" entry found in blockchain config");
if (this.qoraHoldersShare == null)
Settings.throwValidationError("No \"qoraHoldersShare\" entry found in blockchain config");
if (this.qoraHoldersShareByHeight == null)
Settings.throwValidationError("No \"qoraHoldersShareByHeight\" entry found in blockchain config");
if (this.qoraPerQortReward == null)
Settings.throwValidationError("No \"qoraPerQortReward\" entry found in blockchain config");
@@ -498,7 +527,7 @@ public class BlockChain {
Settings.throwValidationError(String.format("Missing feature trigger \"%s\" in blockchain config", featureTrigger.name()));
// Check block reward share bounds
long totalShare = this.qoraHoldersShare;
long totalShare = this.getQoraHoldersShareAtHeight(1);
// Add share percents for account-level-based rewards
for (AccountLevelShareBin accountLevelShareBin : this.sharesByLevel)
totalShare += accountLevelShareBin.share;
@@ -536,6 +565,7 @@ public class BlockChain {
this.blocksNeededByLevel = Collections.unmodifiableList(this.blocksNeededByLevel);
this.cumulativeBlocksByLevel = Collections.unmodifiableList(this.cumulativeBlocksByLevel);
this.blockTimingsByHeight = Collections.unmodifiableList(this.blockTimingsByHeight);
this.qoraHoldersShareByHeight = Collections.unmodifiableList(this.qoraHoldersShareByHeight);
}
/**

View File

@@ -15,6 +15,7 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -40,6 +41,7 @@ public class AutoUpdate extends Thread {
public static final String JAR_FILENAME = "qortal.jar";
public static final String NEW_JAR_FILENAME = "new-" + JAR_FILENAME;
public static final String AGENTLIB_JVM_HOLDER_ARG = "-DQORTAL_agentlib=";
private static final Logger LOGGER = LogManager.getLogger(AutoUpdate.class);
private static final long CHECK_INTERVAL = 20 * 60 * 1000L; // ms
@@ -243,6 +245,11 @@ public class AutoUpdate extends Thread {
// JVM arguments
javaCmd.addAll(ManagementFactory.getRuntimeMXBean().getInputArguments());
// Disable, but retain, any -agentlib JVM arg as sub-process might fail if it tries to reuse same port
javaCmd = javaCmd.stream()
.map(arg -> arg.replace("-agentlib", AGENTLIB_JVM_HOLDER_ARG))
.collect(Collectors.toList());
// Remove JNI options as they won't be supported by command-line 'java'
// These are typically added by the AdvancedInstaller Java launcher EXE
javaCmd.removeAll(Arrays.asList("abort", "exit", "vfprintf"));
@@ -261,10 +268,19 @@ public class AutoUpdate extends Thread {
Translator.INSTANCE.translate("SysTray", "APPLYING_UPDATE_AND_RESTARTING"),
MessageType.INFO);
new ProcessBuilder(javaCmd).start();
ProcessBuilder processBuilder = new ProcessBuilder(javaCmd);
// New process will inherit our stdout and stderr
processBuilder.redirectOutput(ProcessBuilder.Redirect.INHERIT);
processBuilder.redirectError(ProcessBuilder.Redirect.INHERIT);
Process process = processBuilder.start();
// Nothing to pipe to new process, so close output stream (process's stdin)
process.getOutputStream().close();
return true; // applying update OK
} catch (IOException e) {
} catch (Exception e) {
LOGGER.error(String.format("Failed to apply update: %s", e.getMessage()));
try {

View File

@@ -65,9 +65,8 @@ public class BlockMinter extends Thread {
// Lite nodes do not mint
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
if (Settings.getInstance().getWipeUnconfirmedOnStart()) {
if (Settings.getInstance().getWipeUnconfirmedOnStart()) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Wipe existing unconfirmed transactions
List<TransactionData> unconfirmedTransactions = repository.getTransactionRepository().getUnconfirmedTransactions();
@@ -77,30 +76,31 @@ public class BlockMinter extends Thread {
}
repository.saveChanges();
} catch (DataException e) {
LOGGER.warn("Repository issue trying to wipe unconfirmed transactions on start-up: {}", e.getMessage());
// Fall-through to normal behaviour in case we can recover
}
}
// Going to need this a lot...
BlockRepository blockRepository = repository.getBlockRepository();
BlockData previousBlockData = null;
BlockData previousBlockData = null;
// Vars to keep track of blocks that were skipped due to chain weight
byte[] parentSignatureForLastLowWeightBlock = null;
Long timeOfLastLowWeightBlock = null;
// Vars to keep track of blocks that were skipped due to chain weight
byte[] parentSignatureForLastLowWeightBlock = null;
Long timeOfLastLowWeightBlock = null;
List<Block> newBlocks = new ArrayList<>();
List<Block> newBlocks = new ArrayList<>();
// Flags for tracking change in whether minting is possible,
// so we can notify Controller, and further update SysTray, etc.
boolean isMintingPossible = false;
boolean wasMintingPossible = isMintingPossible;
while (running) {
repository.discardChanges(); // Free repository locks, if any
// Flags for tracking change in whether minting is possible,
// so we can notify Controller, and further update SysTray, etc.
boolean isMintingPossible = false;
boolean wasMintingPossible = isMintingPossible;
while (running) {
if (isMintingPossible != wasMintingPossible)
Controller.getInstance().onMintingPossibleChange(isMintingPossible);
if (isMintingPossible != wasMintingPossible)
Controller.getInstance().onMintingPossibleChange(isMintingPossible);
wasMintingPossible = isMintingPossible;
wasMintingPossible = isMintingPossible;
try {
// Sleep for a while
Thread.sleep(1000);
@@ -114,319 +114,338 @@ public class BlockMinter extends Thread {
if (minLatestBlockTimestamp == null)
continue;
// No online accounts? (e.g. during startup)
if (OnlineAccountsManager.getInstance().getOnlineAccounts().isEmpty())
// No online accounts for current timestamp? (e.g. during startup)
if (!OnlineAccountsManager.getInstance().hasOnlineAccounts())
continue;
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
// No minting accounts?
if (mintingAccountsData.isEmpty())
continue;
try (final Repository repository = RepositoryManager.getRepository()) {
// Going to need this a lot...
BlockRepository blockRepository = repository.getBlockRepository();
// Disregard minting accounts that are no longer valid, e.g. by transfer/loss of founder flag or account level
// Note that minting accounts are actually reward-shares in Qortal
Iterator<MintingAccountData> madi = mintingAccountsData.iterator();
while (madi.hasNext()) {
MintingAccountData mintingAccountData = madi.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't exist - probably cancelled but not yet removed from node's list of minting accounts
madi.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
madi.remove();
continue;
}
// Optional (non-validated) prevention of block submissions below a defined level.
// This is an unvalidated version of Blockchain.minAccountLevelToMint
// and exists only to reduce block candidates by default.
int level = mintingAccount.getEffectiveMintingLevel();
if (level < BlockChain.getInstance().getMinAccountLevelForBlockSubmissions()) {
madi.remove();
continue;
}
}
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
if (Synchronizer.getInstance().getRecoveryMode() == false)
peers.removeIf(Controller.hasNoRecentBlock);
// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
continue;
// If we are stuck on an invalid block, we should allow an alternative to be minted
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
// No minting accounts?
if (mintingAccountsData.isEmpty())
    continue;

// Disregard minting accounts that are no longer valid, e.g. by transfer/loss of founder flag or account level
// Note that minting accounts are actually reward-shares in Qortal
Iterator<MintingAccountData> madi = mintingAccountsData.iterator();
while (madi.hasNext()) {
    MintingAccountData mintingAccountData = madi.next();

    RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
    if (rewardShareData == null) {
        // Reward-share doesn't exist - probably cancelled but not yet removed from node's list of minting accounts
        madi.remove();
        continue;
    }

    Account mintingAccount = new Account(repository, rewardShareData.getMinter());
    if (!mintingAccount.canMint()) {
        // Minting-account component of reward-share can no longer mint - disregard
        madi.remove();
        continue;
    }

    // Optional (non-validated) prevention of block submissions below a defined level.
    // This is an unvalidated version of Blockchain.minAccountLevelToMint
    // and exists only to reduce block candidates by default.
    int level = mintingAccount.getEffectiveMintingLevel();
    if (level < BlockChain.getInstance().getMinAccountLevelForBlockSubmissions()) {
        madi.remove();
        continue;
    }
}

// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();

// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);

// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
if (Synchronizer.getInstance().getRecoveryMode() == false)
    peers.removeIf(Controller.hasNoRecentBlock);

// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
    continue;

// If we are stuck on an invalid block, we should allow an alternative to be minted
boolean recoverInvalidBlock = false;
if (Synchronizer.getInstance().timeInvalidBlockLastReceived != null) {
    // We've had at least one invalid block
    long timeSinceLastValidBlock = NTP.getTime() - Synchronizer.getInstance().timeValidBlockLastReceived;
    long timeSinceLastInvalidBlock = NTP.getTime() - Synchronizer.getInstance().timeInvalidBlockLastReceived;
    if (timeSinceLastValidBlock > INVALID_BLOCK_RECOVERY_TIMEOUT) {
        if (timeSinceLastInvalidBlock < INVALID_BLOCK_RECOVERY_TIMEOUT) {
            // Last valid block was more than 10 mins ago, but we've had an invalid block since then
            // Assume that the chain has stalled because there is no alternative valid candidate
            // Enter recovery mode to allow alternative, valid candidates to be minted
            recoverInvalidBlock = true;
        }
    }
}

// If our latest block isn't recent then we need to synchronize instead of minting, unless we're in recovery mode.
if (!peers.isEmpty() && lastBlockData.getTimestamp() < minLatestBlockTimestamp)
    if (Synchronizer.getInstance().getRecoveryMode() == false && recoverInvalidBlock == false)
        continue;

// There are enough peers with a recent block and our latest block is recent
// so go ahead and mint a block if possible.
isMintingPossible = true;

// Reattach newBlocks to new repository handle
for (Block newBlock : newBlocks)
    newBlock.setRepository(repository);

// Check blockchain hasn't changed
if (previousBlockData == null || !Arrays.equals(previousBlockData.getSignature(), lastBlockData.getSignature())) {
    previousBlockData = lastBlockData;
    newBlocks.clear();

    // Reduce log timeout
    logTimeout = 10 * 1000L;

    // Last low weight block is no longer valid
    parentSignatureForLastLowWeightBlock = null;
}

// Discard accounts we have already built blocks with
mintingAccountsData.removeIf(mintingAccountData -> newBlocks.stream().anyMatch(newBlock -> Arrays.equals(newBlock.getBlockData().getMinterPublicKey(), mintingAccountData.getPublicKey())));

// Do we need to build any potential new blocks?
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());

// We might need to sit the next block out, if one of our minting accounts signed the previous one
final byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
final boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock) {
    LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
    continue;
}

if (parentSignatureForLastLowWeightBlock != null) {
    // The last iteration found a higher weight block in the network, so sleep for a while
    // to allow us to sync the higher weight chain. We are sleeping here rather than when
    // detected as we don't want to hold the blockchain lock open.
    LOGGER.info("Sleeping for 10 seconds...");
    Thread.sleep(10 * 1000L);
}

for (PrivateKeyAccount mintingAccount : newBlocksMintingAccounts) {
    // First block does the AT heavy-lifting
    if (newBlocks.isEmpty()) {
        Block newBlock = Block.mint(repository, previousBlockData, mintingAccount);
        if (newBlock == null) {
            // For some reason we can't mint right now
            moderatedLog(() -> LOGGER.error("Couldn't build a to-be-minted block"));
            continue;
        }

        newBlocks.add(newBlock);
    } else {
        // The blocks for other minters require less effort...
        Block newBlock = newBlocks.get(0).remint(mintingAccount);
        if (newBlock == null) {
            // For some reason we can't mint right now
            moderatedLog(() -> LOGGER.error("Couldn't rebuild a to-be-minted block"));
            continue;
        }

        newBlocks.add(newBlock);
    }
}

// No potential block candidates?
if (newBlocks.isEmpty())
    continue;

// Make sure we're the only thread modifying the blockchain
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(30, TimeUnit.SECONDS)) {
    LOGGER.debug("Couldn't acquire blockchain lock even after waiting 30 seconds");
    continue;
}

boolean newBlockMinted = false;
Block newBlock = null;

try {
    // Clear repository session state so we have latest view of data
    repository.discardChanges();

    // Now that we have blockchain lock, do final check that chain hasn't changed
    BlockData latestBlockData = blockRepository.getLastBlock();
    if (!Arrays.equals(lastBlockData.getSignature(), latestBlockData.getSignature()))
        continue;

    List<Block> goodBlocks = new ArrayList<>();
    boolean wasInvalidBlockDiscarded = false;
    Iterator<Block> newBlocksIterator = newBlocks.iterator();

    while (newBlocksIterator.hasNext()) {
        Block testBlock = newBlocksIterator.next();

        // Is new block's timestamp valid yet?
        // We do a separate check as some timestamp checks are skipped for testchains
        if (testBlock.isTimestampValid() != ValidationResult.OK)
            continue;

        testBlock.preProcess();

        // Is new block valid yet? (Before adding unconfirmed transactions)
        ValidationResult result = testBlock.isValid();
        if (result != ValidationResult.OK) {
            moderatedLog(() -> LOGGER.error(String.format("To-be-minted block invalid '%s' before adding transactions?", result.name())));

            newBlocksIterator.remove();
            wasInvalidBlockDiscarded = true;

            /*
             * Bail out fast so that we loop around from the top again.
             * This gives BlockMinter the possibility to remint this candidate block using another block from newBlocks,
             * via the Blocks.remint() method, which avoids having to re-process Block ATs all over again.
             * Particularly useful if some aspect of Blocks changes due a timestamp-based feature-trigger (see BlockChain class).
             */
            break;
        }

        goodBlocks.add(testBlock);
    }

    if (wasInvalidBlockDiscarded || goodBlocks.isEmpty())
        continue;

    // Pick best block
    final int parentHeight = previousBlockData.getHeight();
    final byte[] parentBlockSignature = previousBlockData.getSignature();

    BigInteger bestWeight = null;

    for (int bi = 0; bi < goodBlocks.size(); ++bi) {
        BlockData blockData = goodBlocks.get(bi).getBlockData();
        BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);

        int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
        blockSummaryData.setMinterLevel(minterLevel);

        BigInteger blockWeight = Block.calcBlockWeight(parentHeight, parentBlockSignature, blockSummaryData);

        if (bestWeight == null || blockWeight.compareTo(bestWeight) < 0) {
            newBlock = goodBlocks.get(bi);
            bestWeight = blockWeight;
        }
    }

    try {
        if (this.higherWeightChainExists(repository, bestWeight)) {
            // Check if the base block has updated since the last time we were here
            if (parentSignatureForLastLowWeightBlock == null || timeOfLastLowWeightBlock == null ||
                    !Arrays.equals(parentSignatureForLastLowWeightBlock, previousBlockData.getSignature())) {
                // We've switched to a different chain, so reset the timer
                timeOfLastLowWeightBlock = NTP.getTime();
            }
            parentSignatureForLastLowWeightBlock = previousBlockData.getSignature();

            // If less than 30 seconds has passed since first detection of the higher weight chain,
            // we should skip our block submission to give us the opportunity to sync to the better chain
            if (NTP.getTime() - timeOfLastLowWeightBlock < 30 * 1000L) {
                LOGGER.info("Higher weight chain found in peers, so not signing a block this round");
                LOGGER.info("Time since detected: {}", NTP.getTime() - timeOfLastLowWeightBlock);
                continue;
            } else {
                // More than 30 seconds have passed, so we should submit our block candidate anyway.
                LOGGER.info("More than 30 seconds passed, so proceeding to submit block candidate...");
            }
        } else {
            LOGGER.debug("No higher weight chain found in peers");
        }
    } catch (DataException e) {
        LOGGER.debug("Unable to check for a higher weight chain. Proceeding anyway...");
    }

    // Discard any uncommitted changes as a result of the higher weight chain detection
    repository.discardChanges();

    // Clear variables that track low weight blocks
    parentSignatureForLastLowWeightBlock = null;
    timeOfLastLowWeightBlock = null;

    // Add unconfirmed transactions
    addUnconfirmedTransactions(repository, newBlock);

    // Sign to create block's signature
    newBlock.sign();

    // Is newBlock still valid?
    ValidationResult validationResult = newBlock.isValid();
    if (validationResult != ValidationResult.OK) {
        // No longer valid? Report and discard
        LOGGER.error(String.format("To-be-minted block now invalid '%s' after adding unconfirmed transactions?", validationResult.name()));

        // Rebuild block candidates, just to be sure
        newBlocks.clear();
        continue;
    }

    // Add to blockchain - something else will notice and broadcast new block to network
    try {
        newBlock.process();

        repository.saveChanges();

        LOGGER.info(String.format("Minted new block: %d", newBlock.getBlockData().getHeight()));

        RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(newBlock.getBlockData().getMinterPublicKey());

        if (rewardShareData != null) {
            LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s on behalf of %s",
                    newBlock.getBlockData().getHeight(),
                    Base58.encode(newBlock.getBlockData().getSignature()),
                    Base58.encode(newBlock.getParent().getSignature()),
                    rewardShareData.getMinter(),
                    rewardShareData.getRecipient()));
        } else {
            LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s",
                    newBlock.getBlockData().getHeight(),
                    Base58.encode(newBlock.getBlockData().getSignature()),
                    Base58.encode(newBlock.getParent().getSignature()),
                    newBlock.getMinter().getAddress()));
        }

        // Notify network after we're released blockchain lock
        newBlockMinted = true;

        // Notify Controller
        repository.discardChanges(); // clear transaction status to prevent deadlocks
        Controller.getInstance().onNewBlock(newBlock.getBlockData());
    } catch (DataException e) {
        // Unable to process block - report and discard
        LOGGER.error("Unable to process newly minted block?", e);
        newBlocks.clear();
    }
} finally {
    blockchainLock.unlock();
}

if (newBlockMinted) {
    // Broadcast our new chain to network
    BlockData newBlockData = newBlock.getBlockData();

    Network network = Network.getInstance();
    network.broadcast(broadcastPeer -> network.buildHeightMessage(broadcastPeer, newBlockData));
}
} catch (DataException e) {
    LOGGER.warn("Repository issue while running block minter", e);
} catch (InterruptedException e) {
    // We've been interrupted - time to exit
    return;
}
}
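The recovery path above only triggers when valid blocks have stopped arriving while invalid candidates are still being received. A minimal sketch of that decision as a standalone helper; the timeout value, class name and parameter names are assumptions for illustration, not project code:

public final class RecoveryDecisionSketch {
    // Assumed to mirror BlockMinter's INVALID_BLOCK_RECOVERY_TIMEOUT (10 minutes)
    private static final long INVALID_BLOCK_RECOVERY_TIMEOUT = 10 * 60 * 1000L;

    /** True when no valid block has arrived within the timeout, but an invalid one has. */
    public static boolean shouldRecoverInvalidBlock(long now, Long timeValidBlockLastReceived, Long timeInvalidBlockLastReceived) {
        if (timeValidBlockLastReceived == null || timeInvalidBlockLastReceived == null)
            return false;

        long timeSinceLastValidBlock = now - timeValidBlockLastReceived;
        long timeSinceLastInvalidBlock = now - timeInvalidBlockLastReceived;

        // Chain looks stalled: valid blocks stopped, yet invalid candidates are still seen
        return timeSinceLastValidBlock > INVALID_BLOCK_RECOVERY_TIMEOUT
                && timeSinceLastInvalidBlock < INVALID_BLOCK_RECOVERY_TIMEOUT;
    }
}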
@@ -557,18 +576,23 @@ public class BlockMinter extends Thread {
// This peer has common block data
CommonBlockData commonBlockData = peer.getCommonBlockData();
BlockSummaryData commonBlockSummaryData = commonBlockData.getCommonBlockSummary();
if (commonBlockData.getChainWeight() != null) {
if (commonBlockData.getChainWeight() != null && peer.getCommonBlockData().getBlockSummariesAfterCommonBlock() != null) {
// The synchronizer has calculated this peer's chain weight
BigInteger ourChainWeightSinceCommonBlock = this.getOurChainWeightSinceBlock(repository, commonBlockSummaryData, commonBlockData.getBlockSummariesAfterCommonBlock());
BigInteger ourChainWeight = ourChainWeightSinceCommonBlock.add(blockCandidateWeight);
BigInteger peerChainWeight = commonBlockData.getChainWeight();
if (peerChainWeight.compareTo(ourChainWeight) >= 0) {
// This peer has a higher weight chain than ours
LOGGER.debug("Peer {} is on a higher weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
return true;
if (!Synchronizer.getInstance().containsInvalidBlockSummary(peer.getCommonBlockData().getBlockSummariesAfterCommonBlock())) {
// .. and it doesn't hold any invalid blocks
BigInteger ourChainWeightSinceCommonBlock = this.getOurChainWeightSinceBlock(repository, commonBlockSummaryData, commonBlockData.getBlockSummariesAfterCommonBlock());
BigInteger ourChainWeight = ourChainWeightSinceCommonBlock.add(blockCandidateWeight);
BigInteger peerChainWeight = commonBlockData.getChainWeight();
if (peerChainWeight.compareTo(ourChainWeight) >= 0) {
// This peer has a higher weight chain than ours
LOGGER.info("Peer {} is on a higher weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
return true;
} else {
LOGGER.debug("Peer {} is on a lower weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
}
} else {
LOGGER.debug("Peer {} is on a lower weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
LOGGER.debug("Peer {} has an invalid block", peer);
}
} else {
LOGGER.debug("Peer {} has no chain weight", peer);

View File

@@ -113,6 +113,7 @@ public class Controller extends Thread {
private long repositoryBackupTimestamp = startTime; // ms
private long repositoryMaintenanceTimestamp = startTime; // ms
private long repositoryCheckpointTimestamp = startTime; // ms
private long prunePeersTimestamp = startTime; // ms
private long ntpCheckTimestamp = startTime; // ms
private long deleteExpiredTimestamp = startTime + DELETE_EXPIRED_INTERVAL; // ms
@@ -552,6 +553,7 @@ public class Controller extends Thread {
final long repositoryBackupInterval = Settings.getInstance().getRepositoryBackupInterval();
final long repositoryCheckpointInterval = Settings.getInstance().getRepositoryCheckpointInterval();
long repositoryMaintenanceInterval = getRandomRepositoryMaintenanceInterval();
final long prunePeersInterval = 5 * 60 * 1000L; // Every 5 minutes
// Start executor service for trimming or pruning
PruneManager.getInstance().start();
@@ -649,10 +651,15 @@ public class Controller extends Thread {
}
// Prune stuck/slow/old peers
try {
Network.getInstance().prunePeers();
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue when trying to prune peers: %s", e.getMessage()));
if (now >= prunePeersTimestamp + prunePeersInterval) {
prunePeersTimestamp = now + prunePeersInterval;
try {
LOGGER.debug("Pruning peers...");
Network.getInstance().prunePeers();
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue when trying to prune peers: %s", e.getMessage()));
}
}
// Delete expired transactions
@@ -715,24 +722,6 @@ public class Controller extends Thread {
return lastMisbehaved != null && lastMisbehaved > NTP.getTime() - MISBEHAVIOUR_COOLOFF;
};
/** True if peer has unknown height or lower height. */
public static final Predicate<Peer> hasShorterBlockchain = peer -> {
BlockData ourLatestBlockData = getInstance().getChainTip();
int ourHeight = ourLatestBlockData.getHeight();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
// Ensure we have chain tip data for this peer
if (peerChainTipData == null)
return true;
// Remove if peer is at a lower height than us
Integer peerHeight = peerChainTipData.getLastHeight();
if (peerHeight == null || peerHeight < ourHeight)
return true;
return false;
};
public static final Predicate<Peer> hasNoRecentBlock = peer -> {
final Long minLatestBlockTimestamp = getMinimumLatestBlockTimestamp();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
@@ -805,23 +794,24 @@ public class Controller extends Thread {
String actionText;
// Use a more tolerant latest block timestamp in the isUpToDate() calls below to reduce misleading statuses.
// Any block in the last 30 minutes is considered "up to date" for the purposes of displaying statuses.
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
// Any block in the last 2 hours is considered "up to date" for the purposes of displaying statuses.
// This also aligns with the time interval required for continued online account submission.
final Long minLatestBlockTimestamp = NTP.getTime() - (2 * 60 * 60 * 1000L);
// Only show sync percent if it's less than 100, to avoid confusion
final Integer syncPercent = Synchronizer.getInstance().getSyncPercent();
final boolean isSyncing = (syncPercent != null && syncPercent < 100);
synchronized (Synchronizer.getInstance().syncLock) {
if (Settings.getInstance().isLite()) {
actionText = Translator.INSTANCE.translate("SysTray", "LITE_NODE");
SysTray.getInstance().setTrayIcon(4);
}
else if (this.isMintingPossible) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
SysTray.getInstance().setTrayIcon(2);
}
else if (numberOfPeers < Settings.getInstance().getMinBlockchainPeers()) {
actionText = Translator.INSTANCE.translate("SysTray", "CONNECTING");
SysTray.getInstance().setTrayIcon(3);
}
else if (!this.isUpToDate(minLatestBlockTimestamp) && Synchronizer.getInstance().isSynchronizing()) {
else if (!this.isUpToDate(minLatestBlockTimestamp) && isSyncing) {
actionText = String.format("%s - %d%%", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"), Synchronizer.getInstance().getSyncPercent());
SysTray.getInstance().setTrayIcon(3);
}
@@ -829,6 +819,10 @@ public class Controller extends Thread {
actionText = String.format("%s", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"));
SysTray.getInstance().setTrayIcon(3);
}
else if (OnlineAccountsManager.getInstance().hasActiveOnlineAccountSignatures()) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
SysTray.getInstance().setTrayIcon(2);
}
else {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_DISABLED");
SysTray.getInstance().setTrayIcon(4);
@@ -1247,6 +1241,10 @@ public class Controller extends Thread {
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsV2Message(peer, message);
break;
case GET_ONLINE_ACCOUNTS_V3:
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsV3Message(peer, message);
break;
case GET_ARBITRARY_DATA:
// Not currently supported
break;

View File

@@ -81,7 +81,7 @@ public class Synchronizer extends Thread {
private boolean syncRequestPending = false;
// Keep track of invalid blocks so that we don't keep trying to sync them
private Map<String, Long> invalidBlockSignatures = Collections.synchronizedMap(new HashMap<>());
private Map<ByteArray, Long> invalidBlockSignatures = Collections.synchronizedMap(new HashMap<>());
public Long timeValidBlockLastReceived = null;
public Long timeInvalidBlockLastReceived = null;
@@ -171,8 +171,8 @@ public class Synchronizer extends Thread {
public Integer getSyncPercent() {
synchronized (this.syncLock) {
// Report as 100% synced if the latest block is within the last 30 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
// Report as 100% synced if the latest block is within the last 60 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
return 100;
}
@@ -199,8 +199,6 @@ public class Synchronizer extends Thread {
if (this.isSynchronizing)
return true;
boolean isNewConsensusActive = NTP.getTime() >= BlockChain.getInstance().getNewConsensusTimestamp();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
@@ -229,10 +227,6 @@ public class Synchronizer extends Thread {
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
return true;
if (isNewConsensusActive)
// Disregard peers with a shorter chain
peers.removeIf(Controller.hasShorterBlockchain);
// Disregard peers that have no block signature or the same block signature as us
peers.removeIf(Controller.hasNoOrSameBlock);
@@ -241,13 +235,11 @@ public class Synchronizer extends Thread {
final int peersBeforeComparison = peers.size();
if (!isNewConsensusActive) {
// Request recent block summaries from the remaining peers, and locate our common block with each
Synchronizer.getInstance().findCommonBlocksWithPeers(peers);
// Request recent block summaries from the remaining peers, and locate our common block with each
Synchronizer.getInstance().findCommonBlocksWithPeers(peers);
// Compare the peers against each other, and against our chain, which will return an updated list excluding those without common blocks
peers = Synchronizer.getInstance().comparePeers(peers);
}
// Compare the peers against each other, and against our chain, which will return an updated list excluding those without common blocks
peers = Synchronizer.getInstance().comparePeers(peers);
// We may have added more inferior chain tips when comparing peers, so remove any peers that are currently on those chains
peers.removeIf(Controller.hasInferiorChainTip);
@@ -625,7 +617,7 @@ public class Synchronizer extends Thread {
// We have already determined that the correct chain diverged from a lower height. We are safe to skip these peers.
for (Peer peer : peersSharingCommonBlock) {
LOGGER.debug(String.format("Peer %s has common block at height %d but the superior chain is at height %d. Removing it from this round.", peer, commonBlockSummary.getHeight(), dropPeersAfterCommonBlockHeight));
this.addInferiorChainSignature(peer.getChainTipData().getLastBlockSignature());
//this.addInferiorChainSignature(peer.getChainTipData().getLastBlockSignature());
}
continue;
}
@@ -636,7 +628,9 @@ public class Synchronizer extends Thread {
int minChainLength = this.calculateMinChainLengthOfPeers(peersSharingCommonBlock, commonBlockSummary);
// Fetch block summaries from each peer
for (Peer peer : peersSharingCommonBlock) {
Iterator peersSharingCommonBlockIterator = peersSharingCommonBlock.iterator();
while (peersSharingCommonBlockIterator.hasNext()) {
Peer peer = (Peer) peersSharingCommonBlockIterator.next();
// If we're shutting down, just return the latest peer list
if (Controller.isStopping())
@@ -693,6 +687,8 @@ public class Synchronizer extends Thread {
if (this.containsInvalidBlockSummary(peer.getCommonBlockData().getBlockSummariesAfterCommonBlock())) {
LOGGER.debug("Ignoring peer %s because it holds an invalid block", peer);
peers.remove(peer);
peersSharingCommonBlockIterator.remove();
continue;
}
// Reduce minChainLength if needed. If we don't have any blocks, this peer will be excluded from chain weight comparisons later in the process, so we shouldn't update minChainLength
@@ -848,6 +844,10 @@ public class Synchronizer extends Thread {
/* Invalid block signature tracking */
public Map<ByteArray, Long> getInvalidBlockSignatures() {
return this.invalidBlockSignatures;
}
private void addInvalidBlockSignature(byte[] signature) {
Long now = NTP.getTime();
if (now == null) {
@@ -855,8 +855,7 @@ public class Synchronizer extends Thread {
}
// Add or update existing entry
String sig58 = Base58.encode(signature);
invalidBlockSignatures.put(sig58, now);
invalidBlockSignatures.put(ByteArray.wrap(signature), now);
}
private void deleteOlderInvalidSignatures(Long now) {
if (now == null) {
@@ -875,17 +874,16 @@ public class Synchronizer extends Thread {
}
}
}
private boolean containsInvalidBlockSummary(List<BlockSummaryData> blockSummaries) {
public boolean containsInvalidBlockSummary(List<BlockSummaryData> blockSummaries) {
if (blockSummaries == null || invalidBlockSignatures == null) {
return false;
}
// Loop through our known invalid blocks and check each one against supplied block summaries
for (String invalidSignature58 : invalidBlockSignatures.keySet()) {
byte[] invalidSignature = Base58.decode(invalidSignature58);
for (ByteArray invalidSignature : invalidBlockSignatures.keySet()) {
for (BlockSummaryData blockSummary : blockSummaries) {
byte[] signature = blockSummary.getSignature();
if (Arrays.equals(signature, invalidSignature)) {
if (Arrays.equals(signature, invalidSignature.value)) {
return true;
}
}
@@ -898,10 +896,9 @@ public class Synchronizer extends Thread {
}
// Loop through our known invalid blocks and check each one against supplied block signatures
for (String invalidSignature58 : invalidBlockSignatures.keySet()) {
byte[] invalidSignature = Base58.decode(invalidSignature58);
for (ByteArray invalidSignature : invalidBlockSignatures.keySet()) {
for (byte[] signature : blockSignatures) {
if (Arrays.equals(signature, invalidSignature)) {
if (Arrays.equals(signature, invalidSignature.value)) {
return true;
}
}
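Switching the invalidBlockSignatures key from a Base58 String to ByteArray avoids repeated encode/decode work while keeping value-based equality; a raw byte[] would not work as a HashMap key because arrays inherit identity-based equals() and hashCode(). A minimal sketch of such a wrapper, assuming the project's own ByteArray utility behaves similarly:

import java.util.Arrays;

public final class ByteArrayKeySketch {
    public final byte[] value;

    private ByteArrayKeySketch(byte[] value) { this.value = value; }

    public static ByteArrayKeySketch wrap(byte[] value) { return new ByteArrayKeySketch(value); }

    @Override
    public boolean equals(Object other) {
        return other instanceof ByteArrayKeySketch && Arrays.equals(this.value, ((ByteArrayKeySketch) other).value);
    }

    @Override
    public int hashCode() { return Arrays.hashCode(this.value); }
}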
@@ -993,13 +990,8 @@ public class Synchronizer extends Thread {
return SynchronizationResult.NOTHING_TO_DO;
}
boolean isNewConsensusActive = NTP.getTime() >= BlockChain.getInstance().getNewConsensusTimestamp();
// Unless we're doing a forced sync, we might need to compare blocks after common block
boolean isBlockComparisonNeeded = isNewConsensusActive
? ourInitialHeight == peerHeight
: ourInitialHeight > commonBlockHeight;
if (!force && isBlockComparisonNeeded) {
if (!force && ourInitialHeight > commonBlockHeight) {
SynchronizationResult chainCompareResult = compareChains(repository, commonBlockData, ourLatestBlockData, peer, peerHeight, peerBlockSummaries);
if (chainCompareResult != SynchronizationResult.OK)
return chainCompareResult;
@@ -1200,56 +1192,27 @@ public class Synchronizer extends Thread {
peerBlockSummaries.addAll(moreBlockSummaries);
}
boolean isNewConsensusActive = NTP.getTime() >= BlockChain.getInstance().getNewConsensusTimestamp();
if (isNewConsensusActive) {
int parentHeight = ourLatestBlockData.getHeight() - 1;
// Fetch our corresponding block summaries
List<BlockSummaryData> ourBlockSummaries = repository.getBlockRepository().getBlockSummaries(commonBlockHeight + 1, ourLatestBlockData.getHeight());
BlockSummaryData ourLatestBlockSummary = new BlockSummaryData(ourLatestBlockData);
byte[] ourParentBlockSignature = ourLatestBlockData.getReference();
// Populate minter account levels for both lists of block summaries
populateBlockSummariesMinterLevels(repository, ourBlockSummaries);
populateBlockSummariesMinterLevels(repository, peerBlockSummaries);
BlockSummaryData peersLatestBlockSummary = peerBlockSummaries.get(peerBlockSummaries.size() - 1);
byte[] peersParentBlockSignature = peerBlockSummaries.size() > 1
? peerBlockSummaries.get(peerBlockSummaries.size() - 1 - 1).getSignature()
: commonBlockSig;
final int mutualHeight = commonBlockHeight + Math.min(ourBlockSummaries.size(), peerBlockSummaries.size());
// Populate minter account levels for both lists of block summaries
populateBlockSummariesMinterLevels(repository, Collections.singletonList(ourLatestBlockSummary));
populateBlockSummariesMinterLevels(repository, Collections.singletonList(peersLatestBlockSummary));
// Calculate cumulative chain weights of both blockchain subsets, from common block to highest mutual block.
BigInteger ourChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockSig, ourBlockSummaries, mutualHeight);
BigInteger peerChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockSig, peerBlockSummaries, mutualHeight);
BigInteger ourChainWeight = Block.calcBlockWeight(parentHeight, ourParentBlockSignature, ourLatestBlockSummary);
BigInteger peerChainWeight = Block.calcBlockWeight(parentHeight, peersParentBlockSignature, peersLatestBlockSummary);
NumberFormat accurateFormatter = new DecimalFormat("0.################E0");
LOGGER.debug(String.format("commonBlockHeight: %d, commonBlockSig: %.8s, ourBlockSummaries.size(): %d, peerBlockSummaries.size(): %d", commonBlockHeight, Base58.encode(commonBlockSig), ourBlockSummaries.size(), peerBlockSummaries.size()));
LOGGER.debug(String.format("Our chain weight: %s, peer's chain weight: %s (higher is better)", accurateFormatter.format(ourChainWeight), accurateFormatter.format(peerChainWeight)));
NumberFormat accurateFormatter = new DecimalFormat("0.################E0");
LOGGER.debug(String.format("Our chain weight: %s, peer's chain weight: %s (higher is better)", accurateFormatter.format(ourChainWeight), accurateFormatter.format(peerChainWeight)));
// If our blockchain has greater weight then don't synchronize with peer
if (ourChainWeight.compareTo(peerChainWeight) >= 0) {
LOGGER.debug(String.format("Not synchronizing with peer %s as we have better blockchain", peer));
return SynchronizationResult.INFERIOR_CHAIN;
}
} else {
// Fetch our corresponding block summaries
List<BlockSummaryData> ourBlockSummaries = repository.getBlockRepository().getBlockSummaries(commonBlockHeight + 1, ourLatestBlockData.getHeight());
// Populate minter account levels for both lists of block summaries
populateBlockSummariesMinterLevels(repository, ourBlockSummaries);
populateBlockSummariesMinterLevels(repository, peerBlockSummaries);
final int mutualHeight = commonBlockHeight + Math.min(ourBlockSummaries.size(), peerBlockSummaries.size());
// Calculate cumulative chain weights of both blockchain subsets, from common block to highest mutual block.
BigInteger ourChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockSig, ourBlockSummaries, mutualHeight);
BigInteger peerChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockSig, peerBlockSummaries, mutualHeight);
NumberFormat accurateFormatter = new DecimalFormat("0.################E0");
LOGGER.debug(String.format("commonBlockHeight: %d, commonBlockSig: %.8s, ourBlockSummaries.size(): %d, peerBlockSummaries.size(): %d", commonBlockHeight, Base58.encode(commonBlockSig), ourBlockSummaries.size(), peerBlockSummaries.size()));
LOGGER.debug(String.format("Our chain weight: %s, peer's chain weight: %s (higher is better)", accurateFormatter.format(ourChainWeight), accurateFormatter.format(peerChainWeight)));
// If our blockchain has greater weight then don't synchronize with peer
if (ourChainWeight.compareTo(peerChainWeight) >= 0) {
LOGGER.debug(String.format("Not synchronizing with peer %s as we have better blockchain", peer));
return SynchronizationResult.INFERIOR_CHAIN;
}
// If our blockchain has greater weight then don't synchronize with peer
if (ourChainWeight.compareTo(peerChainWeight) >= 0) {
LOGGER.debug(String.format("Not synchronizing with peer %s as we have better blockchain", peer));
return SynchronizationResult.INFERIOR_CHAIN;
}
}

View File

@@ -67,6 +67,9 @@ public class ArbitraryDataFileListManager {
/** Maximum number of hops that a file list relay request is allowed to make */
public static int RELAY_REQUEST_MAX_HOPS = 4;
/** Minimum peer version to use relay */
public static String RELAY_MIN_PEER_VERSION = "3.4.0";
private ArbitraryDataFileListManager() {
}
@@ -524,6 +527,7 @@ public class ArbitraryDataFileListManager {
forwardArbitraryDataFileListMessage = new ArbitraryDataFileListMessage(signature, hashes, requestTime, requestHops,
arbitraryDataFileListMessage.getPeerAddress(), arbitraryDataFileListMessage.isRelayPossible());
}
forwardArbitraryDataFileListMessage.setId(message.getId());
// Forward to requesting peer
LOGGER.debug("Forwarding file list with {} hashes to requesting peer: {}", hashes.size(), requestingPeer);
@@ -694,9 +698,10 @@ public class ArbitraryDataFileListManager {
LOGGER.debug("Rebroadcasting hash list request from peer {} for signature {} to our other peers... totalRequestTime: {}, requestHops: {}", peer, Base58.encode(signature), totalRequestTime, requestHops);
Network.getInstance().broadcast(
broadcastPeer -> broadcastPeer == peer ||
Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost())
? null : relayGetArbitraryDataFileListMessage);
broadcastPeer ->
!broadcastPeer.isAtLeastVersion(RELAY_MIN_PEER_VERSION) ? null :
broadcastPeer == peer || Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost()) ? null : relayGetArbitraryDataFileListMessage
);
}
else {

View File

@@ -22,8 +22,7 @@ import org.qortal.utils.Triple;
import java.io.IOException;
import java.util.*;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.RELAY_REQUEST_MAX_DURATION;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.RELAY_REQUEST_MAX_HOPS;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.*;
public class ArbitraryMetadataManager {
@@ -435,12 +434,13 @@ public class ArbitraryMetadataManager {
// Relay request hasn't reached the maximum number of hops yet, so can be rebroadcast
Message relayGetArbitraryMetadataMessage = new GetArbitraryMetadataMessage(signature, requestTime, requestHops);
relayGetArbitraryMetadataMessage.setId(message.getId());
LOGGER.debug("Rebroadcasting metadata request from peer {} for signature {} to our other peers... totalRequestTime: {}, requestHops: {}", peer, Base58.encode(signature), totalRequestTime, requestHops);
Network.getInstance().broadcast(
broadcastPeer -> broadcastPeer == peer ||
Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost())
? null : relayGetArbitraryMetadataMessage);
broadcastPeer ->
!broadcastPeer.isAtLeastVersion(RELAY_MIN_PEER_VERSION) ? null :
broadcastPeer == peer || Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost()) ? null : relayGetArbitraryMetadataMessage);
}
else {

View File

@@ -242,8 +242,8 @@ public class TradeBot implements Listener {
if (!(event instanceof Synchronizer.NewChainTipEvent))
return;
// Don't process trade bots or broadcast presence timestamps if our chain is more than 30 minutes old
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
// Don't process trade bots or broadcast presence timestamps if our chain is more than 60 minutes old
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
return;
@@ -292,7 +292,7 @@ public class TradeBot implements Listener {
}
public static byte[] deriveTradeNativePublicKey(byte[] privateKey) {
return PrivateKeyAccount.toPublicKey(privateKey);
return Crypto.toPublicKey(privateKey);
}
public static byte[] deriveTradeForeignPublicKey(byte[] privateKey) {

View File

@@ -375,7 +375,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
public Long getWalletBalanceFromTransactions(String key58) throws ForeignBlockchainException {
long balance = 0;
Comparator<SimpleTransaction> oldestTimestampFirstComparator = Comparator.comparingInt(SimpleTransaction::getTimestamp);
Comparator<SimpleTransaction> oldestTimestampFirstComparator = Comparator.comparingLong(SimpleTransaction::getTimestamp);
List<SimpleTransaction> transactions = getWalletTransactions(key58).stream().sorted(oldestTimestampFirstComparator).collect(Collectors.toList());
for (SimpleTransaction transaction : transactions) {
balance += transaction.getTotalAmount();
@@ -455,7 +455,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// Process new keys
} while (true);
Comparator<SimpleTransaction> newestTimestampFirstComparator = Comparator.comparingInt(SimpleTransaction::getTimestamp).reversed();
Comparator<SimpleTransaction> newestTimestampFirstComparator = Comparator.comparingLong(SimpleTransaction::getTimestamp).reversed();
// Update cache and return
transactionsCacheTimestamp = NTP.getTime();
@@ -537,7 +537,8 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// All inputs and outputs relate to this wallet, so the balance should be unaffected
amount = 0;
}
return new SimpleTransaction(t.txHash, t.timestamp, amount, fee, inputs, outputs);
long timestampMillis = t.timestamp * 1000L;
return new SimpleTransaction(t.txHash, timestampMillis, amount, fee, inputs, outputs);
}
/**

View File

@@ -7,7 +7,7 @@ import java.util.List;
@XmlAccessorType(XmlAccessType.FIELD)
public class SimpleTransaction {
private String txHash;
private Integer timestamp;
private Long timestamp;
private long totalAmount;
private long feeAmount;
private List<Input> inputs;
@@ -74,7 +74,7 @@ public class SimpleTransaction {
public SimpleTransaction() {
}
public SimpleTransaction(String txHash, Integer timestamp, long totalAmount, long feeAmount, List<Input> inputs, List<Output> outputs) {
public SimpleTransaction(String txHash, Long timestamp, long totalAmount, long feeAmount, List<Input> inputs, List<Output> outputs) {
this.txHash = txHash;
this.timestamp = timestamp;
this.totalAmount = totalAmount;
@@ -87,7 +87,7 @@ public class SimpleTransaction {
return txHash;
}
public Integer getTimestamp() {
public Long getTimestamp() {
return timestamp;
}
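The getTimestamp() change from Integer to Long goes hand in hand with the Bitcoiny change above: foreign-chain lookups report epoch seconds, while the UI and QORT wallet expect epoch milliseconds, so second-precision values rendered as milliseconds appear as dates near 1970. A trivial sketch of the conversion; the helper name is illustrative only:

public final class TimestampUnitsSketch {
    /** Convert an epoch-seconds value (as reported by foreign chains) to epoch milliseconds. */
    public static long secondsToMillis(long epochSeconds) {
        return epochSeconds * 1000L;
    }
}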

View File

@@ -1,99 +0,0 @@
package org.qortal.crypto;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.Arrays;
import org.bouncycastle.crypto.Digest;
import org.bouncycastle.math.ec.rfc7748.X25519;
import org.bouncycastle.math.ec.rfc7748.X25519Field;
import org.bouncycastle.math.ec.rfc8032.Ed25519;
/** Additions to BouncyCastle providing Ed25519 to X25519 key conversion. */
public class BouncyCastle25519 {
private static final Class<?> pointAffineClass;
private static final Constructor<?> pointAffineCtor;
private static final Method decodePointVarMethod;
private static final Field yField;
static {
try {
Class<?> ed25519Class = Ed25519.class;
pointAffineClass = Arrays.stream(ed25519Class.getDeclaredClasses()).filter(clazz -> clazz.getSimpleName().equals("PointAffine")).findFirst().get();
if (pointAffineClass == null)
throw new ClassNotFoundException("Can't locate PointExt inner class inside Ed25519");
decodePointVarMethod = ed25519Class.getDeclaredMethod("decodePointVar", byte[].class, int.class, boolean.class, pointAffineClass);
decodePointVarMethod.setAccessible(true);
pointAffineCtor = pointAffineClass.getDeclaredConstructors()[0];
pointAffineCtor.setAccessible(true);
yField = pointAffineClass.getDeclaredField("y");
yField.setAccessible(true);
} catch (NoSuchMethodException | SecurityException | IllegalArgumentException | NoSuchFieldException | ClassNotFoundException e) {
throw new RuntimeException("Can't initialize BouncyCastle25519 shim", e);
}
}
private static int[] obtainYFromPublicKey(byte[] ed25519PublicKey) {
try {
Object pA = pointAffineCtor.newInstance();
Boolean result = (Boolean) decodePointVarMethod.invoke(null, ed25519PublicKey, 0, true, pA);
if (result == null || !result)
return null;
return (int[]) yField.get(pA);
} catch (SecurityException | InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
throw new RuntimeException("Can't reflect into BouncyCastle", e);
}
}
public static byte[] toX25519PublicKey(byte[] ed25519PublicKey) {
int[] one = new int[X25519Field.SIZE];
X25519Field.one(one);
int[] y = obtainYFromPublicKey(ed25519PublicKey);
int[] oneMinusY = new int[X25519Field.SIZE];
X25519Field.sub(one, y, oneMinusY);
int[] onePlusY = new int[X25519Field.SIZE];
X25519Field.add(one, y, onePlusY);
int[] oneMinusYInverted = new int[X25519Field.SIZE];
X25519Field.inv(oneMinusY, oneMinusYInverted);
int[] u = new int[X25519Field.SIZE];
X25519Field.mul(onePlusY, oneMinusYInverted, u);
X25519Field.normalize(u);
byte[] x25519PublicKey = new byte[X25519.SCALAR_SIZE];
X25519Field.encode(u, x25519PublicKey, 0);
return x25519PublicKey;
}
public static byte[] toX25519PrivateKey(byte[] ed25519PrivateKey) {
Digest d = Ed25519.createPrehash();
byte[] h = new byte[d.getDigestSize()];
d.update(ed25519PrivateKey, 0, ed25519PrivateKey.length);
d.doFinal(h, 0);
byte[] s = new byte[X25519.SCALAR_SIZE];
System.arraycopy(h, 0, s, 0, X25519.SCALAR_SIZE);
s[0] &= 0xF8;
s[X25519.SCALAR_SIZE - 1] &= 0x7F;
s[X25519.SCALAR_SIZE - 1] |= 0x40;
return s;
}
}

File diff suppressed because it is too large.

View File

@@ -253,6 +253,10 @@ public abstract class Crypto {
return false;
}
public static byte[] toPublicKey(byte[] privateKey) {
return new Ed25519PrivateKeyParameters(privateKey, 0).generatePublicKey().getEncoded();
}
public static boolean verify(byte[] publicKey, byte[] signature, byte[] message) {
try {
return Ed25519.verify(signature, 0, publicKey, 0, message, 0, message.length);
@@ -264,16 +268,24 @@ public abstract class Crypto {
public static byte[] sign(Ed25519PrivateKeyParameters edPrivateKeyParams, byte[] message) {
byte[] signature = new byte[SIGNATURE_LENGTH];
edPrivateKeyParams.sign(Ed25519.Algorithm.Ed25519, edPrivateKeyParams.generatePublicKey(), null, message, 0, message.length, signature, 0);
edPrivateKeyParams.sign(Ed25519.Algorithm.Ed25519,null, message, 0, message.length, signature, 0);
return signature;
}
public static byte[] sign(byte[] privateKey, byte[] message) {
byte[] signature = new byte[SIGNATURE_LENGTH];
new Ed25519PrivateKeyParameters(privateKey, 0).sign(Ed25519.Algorithm.Ed25519,null, message, 0, message.length, signature, 0);
return signature;
}
public static byte[] getSharedSecret(byte[] privateKey, byte[] publicKey) {
byte[] x25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(privateKey);
byte[] x25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(privateKey);
X25519PrivateKeyParameters xPrivateKeyParams = new X25519PrivateKeyParameters(x25519PrivateKey, 0);
byte[] x25519PublicKey = BouncyCastle25519.toX25519PublicKey(publicKey);
byte[] x25519PublicKey = Qortal25519Extras.toX25519PublicKey(publicKey);
X25519PublicKeyParameters xPublicKeyParams = new X25519PublicKeyParameters(x25519PublicKey, 0);
byte[] sharedSecret = new byte[SHARED_SECRET_LENGTH];
@@ -281,5 +293,4 @@ public abstract class Crypto {
return sharedSecret;
}
}
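The new Crypto.toPublicKey() and sign(byte[], byte[]) helpers let callers work directly with raw 32-byte Ed25519 private keys. A small usage sketch of the same BouncyCastle API; the class name here is illustrative only:

import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;

public final class Ed25519DeriveSketch {
    /** Derive the Ed25519 public key for a raw 32-byte private key, as Crypto.toPublicKey() does. */
    public static byte[] derivePublicKey(byte[] privateKey) {
        return new Ed25519PrivateKeyParameters(privateKey, 0).generatePublicKey().getEncoded();
    }
}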

View File

@@ -0,0 +1,234 @@
package org.qortal.crypto;
import org.bouncycastle.crypto.Digest;
import org.bouncycastle.crypto.digests.SHA512Digest;
import org.bouncycastle.math.ec.rfc7748.X25519;
import org.bouncycastle.math.ec.rfc7748.X25519Field;
import org.bouncycastle.math.ec.rfc8032.Ed25519;
import org.bouncycastle.math.raw.Nat256;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Collection;
/**
* Additions to BouncyCastle providing:
* <p></p>
* <ul>
* <li>Ed25519 to X25519 key conversion</li>
* <li>Aggregate public keys</li>
* <li>Aggregate signatures</li>
* </ul>
*/
public abstract class Qortal25519Extras extends BouncyCastleEd25519 {
private static final SecureRandom SECURE_RANDOM = new SecureRandom();
public static byte[] toX25519PublicKey(byte[] ed25519PublicKey) {
int[] one = new int[X25519Field.SIZE];
X25519Field.one(one);
PointAffine pA = new PointAffine();
if (!decodePointVar(ed25519PublicKey, 0, true, pA))
return null;
int[] y = pA.y;
int[] oneMinusY = new int[X25519Field.SIZE];
X25519Field.sub(one, y, oneMinusY);
int[] onePlusY = new int[X25519Field.SIZE];
X25519Field.add(one, y, onePlusY);
int[] oneMinusYInverted = new int[X25519Field.SIZE];
X25519Field.inv(oneMinusY, oneMinusYInverted);
int[] u = new int[X25519Field.SIZE];
X25519Field.mul(onePlusY, oneMinusYInverted, u);
X25519Field.normalize(u);
byte[] x25519PublicKey = new byte[X25519.SCALAR_SIZE];
X25519Field.encode(u, x25519PublicKey, 0);
return x25519PublicKey;
}
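For reference, this implements the standard birational map between the Ed25519 (twisted Edwards) curve and Curve25519 (Montgomery form): the Montgomery u-coordinate is u = (1 + y) / (1 - y) mod p, which is why the code multiplies (one + y) by the field inverse of (one - y).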
public static byte[] toX25519PrivateKey(byte[] ed25519PrivateKey) {
Digest d = Ed25519.createPrehash();
byte[] h = new byte[d.getDigestSize()];
d.update(ed25519PrivateKey, 0, ed25519PrivateKey.length);
d.doFinal(h, 0);
byte[] s = new byte[X25519.SCALAR_SIZE];
System.arraycopy(h, 0, s, 0, X25519.SCALAR_SIZE);
s[0] &= 0xF8;
s[X25519.SCALAR_SIZE - 1] &= 0x7F;
s[X25519.SCALAR_SIZE - 1] |= 0x40;
return s;
}
// Mostly for test support
public static PointAccum newPointAccum() {
return new PointAccum();
}
public static byte[] aggregatePublicKeys(Collection<byte[]> publicKeys) {
PointAccum rAccum = null;
for (byte[] publicKey : publicKeys) {
PointAffine pA = new PointAffine();
if (!decodePointVar(publicKey, 0, false, pA))
// Failed to decode
return null;
if (rAccum == null) {
rAccum = new PointAccum();
pointCopy(pA, rAccum);
} else {
pointAdd(pointCopy(pA), rAccum);
}
}
byte[] publicKey = new byte[SCALAR_BYTES];
if (0 == encodePoint(rAccum, publicKey, 0))
// Failed to encode
return null;
return publicKey;
}
public static byte[] aggregateSignatures(Collection<byte[]> signatures) {
// Signatures are (R, s)
// R is a point
// s is a scalar
PointAccum rAccum = null;
int[] sAccum = new int[SCALAR_INTS];
byte[] rEncoded = new byte[POINT_BYTES];
int[] sPart = new int[SCALAR_INTS];
for (byte[] signature : signatures) {
System.arraycopy(signature,0, rEncoded, 0, rEncoded.length);
PointAffine pA = new PointAffine();
if (!decodePointVar(rEncoded, 0, false, pA))
// Failed to decode
return null;
if (rAccum == null) {
rAccum = new PointAccum();
pointCopy(pA, rAccum);
decode32(signature, rEncoded.length, sAccum, 0, SCALAR_INTS);
} else {
pointAdd(pointCopy(pA), rAccum);
decode32(signature, rEncoded.length, sPart, 0, SCALAR_INTS);
Nat256.addTo(sPart, sAccum);
// "mod L" on sAccum
if (Nat256.gte(sAccum, L))
Nat256.subFrom(L, sAccum);
}
}
byte[] signature = new byte[SIGNATURE_SIZE];
if (0 == encodePoint(rAccum, signature, 0))
// Failed to encode
return null;
for (int i = 0; i < sAccum.length; ++i) {
encode32(sAccum[i], signature, POINT_BYTES + i * 4);
}
return signature;
}
public static byte[] signForAggregation(byte[] privateKey, byte[] message) {
// Very similar to BouncyCastle's implementation except we use secure random nonce and different hash
Digest d = new SHA512Digest();
byte[] h = new byte[d.getDigestSize()];
d.reset();
d.update(privateKey, 0, privateKey.length);
d.doFinal(h, 0);
byte[] sH = new byte[SCALAR_BYTES];
pruneScalar(h, 0, sH);
byte[] publicKey = new byte[SCALAR_BYTES];
scalarMultBaseEncoded(sH, publicKey, 0);
byte[] rSeed = new byte[d.getDigestSize()];
SECURE_RANDOM.nextBytes(rSeed);
byte[] r = new byte[SCALAR_BYTES];
pruneScalar(rSeed, 0, r);
byte[] R = new byte[POINT_BYTES];
scalarMultBaseEncoded(r, R, 0);
d.reset();
d.update(message, 0, message.length);
d.doFinal(h, 0);
byte[] k = reduceScalar(h);
byte[] s = calculateS(r, k, sH);
byte[] signature = new byte[SIGNATURE_SIZE];
System.arraycopy(R, 0, signature, 0, POINT_BYTES);
System.arraycopy(s, 0, signature, POINT_BYTES, SCALAR_BYTES);
return signature;
}
public static boolean verifyAggregated(byte[] publicKey, byte[] signature, byte[] message) {
byte[] R = Arrays.copyOfRange(signature, 0, POINT_BYTES);
byte[] s = Arrays.copyOfRange(signature, POINT_BYTES, POINT_BYTES + SCALAR_BYTES);
if (!checkPointVar(R))
// R out of bounds
return false;
if (!checkScalarVar(s))
// s out of bounds
return false;
byte[] S = new byte[POINT_BYTES];
scalarMultBaseEncoded(s, S, 0);
PointAffine pA = new PointAffine();
if (!decodePointVar(publicKey, 0, true, pA))
// Failed to decode
return false;
Digest d = new SHA512Digest();
byte[] h = new byte[d.getDigestSize()];
d.update(message, 0, message.length);
d.doFinal(h, 0);
byte[] k = reduceScalar(h);
int[] nS = new int[SCALAR_INTS];
decodeScalar(s, 0, nS);
int[] nA = new int[SCALAR_INTS];
decodeScalar(k, 0, nA);
/*PointAccum*/
PointAccum pR = new PointAccum();
scalarMultStrausVar(nS, nA, pA, pR);
byte[] check = new byte[POINT_BYTES];
if (0 == encodePoint(pR, check, 0))
// Failed to encode
return false;
return Arrays.equals(check, R);
}
}
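A hypothetical usage sketch of the aggregation helpers above, assuming every participant signs the exact same message with signForAggregation(); the class name and flow are illustrative and not taken from the project's tests:

import java.util.ArrayList;
import java.util.List;

public class AggregationUsageSketch {
    public static boolean signAndVerify(List<byte[]> privateKeys, List<byte[]> publicKeys, byte[] message) {
        List<byte[]> signatures = new ArrayList<>();
        for (byte[] privateKey : privateKeys)
            signatures.add(Qortal25519Extras.signForAggregation(privateKey, message));

        byte[] aggregateSignature = Qortal25519Extras.aggregateSignatures(signatures);
        byte[] aggregatePublicKey = Qortal25519Extras.aggregatePublicKeys(publicKeys);

        // One verification is intended to stand in for verifying each signature individually
        return Qortal25519Extras.verifyAggregated(aggregatePublicKey, aggregateSignature, message);
    }
}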

View File

@@ -5,6 +5,7 @@ import java.util.Arrays;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlTransient;
import org.qortal.account.PublicKeyAccount;
@@ -16,6 +17,9 @@ public class OnlineAccountData {
protected byte[] signature;
protected byte[] publicKey;
@XmlTransient
private int hash;
// Constructors
// necessary for JAXB serialization
@@ -62,20 +66,23 @@ public class OnlineAccountData {
if (otherOnlineAccountData.timestamp != this.timestamp)
return false;
// Signature more likely to be unique than public key
if (!Arrays.equals(otherOnlineAccountData.signature, this.signature))
return false;
if (!Arrays.equals(otherOnlineAccountData.publicKey, this.publicKey))
return false;
// We don't compare signature because it's not our remit to verify and newer aggregate signatures use random nonces
return true;
}
@Override
public int hashCode() {
// Pretty lazy implementation
return (int) this.timestamp;
int h = this.hash;
if (h == 0) {
this.hash = h = Long.hashCode(this.timestamp)
^ Arrays.hashCode(this.publicKey);
// We don't use signature because newer aggregate signatures use random nonces
}
return h;
}
}
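The hashCode() rework caches the computed value in the new hash field so repeated lookups in large online-account collections don't rehash the public key every time; 0 doubles as the "not yet computed" marker, much like java.lang.String's hash caching. A condensed sketch of the pattern with only the relevant fields:

import java.util.Arrays;

public class CachedHashSketch {
    private final long timestamp;
    private final byte[] publicKey;
    private int hash; // 0 means "not computed yet"

    public CachedHashSketch(long timestamp, byte[] publicKey) {
        this.timestamp = timestamp;
        this.publicKey = publicKey;
    }

    @Override
    public int hashCode() {
        int h = this.hash;
        if (h == 0)
            this.hash = h = Long.hashCode(this.timestamp) ^ Arrays.hashCode(this.publicKey);
        return h;
    }
}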

View File

@@ -469,6 +469,8 @@ public class Network {
class NetworkProcessor extends ExecuteProduceConsume {
private final Logger LOGGER = LogManager.getLogger(NetworkProcessor.class);
private final AtomicLong nextConnectTaskTimestamp = new AtomicLong(0L); // ms - try first connect once NTP syncs
private final AtomicLong nextBroadcastTimestamp = new AtomicLong(0L); // ms - try first broadcast once NTP syncs
@@ -1373,17 +1375,26 @@ public class Network {
// We attempted to connect within the last day
// but we last managed to connect over a week ago.
Predicate<PeerData> isNotOldPeer = peerData -> {
if (peerData.getLastAttempted() == null
|| peerData.getLastAttempted() < now - OLD_PEER_ATTEMPTED_PERIOD) {
// First check if there was a connection attempt within the last day
if (peerData.getLastAttempted() != null
&& peerData.getLastAttempted() > now - OLD_PEER_ATTEMPTED_PERIOD) {
// There was, so now check if we had a successful connection in the last 7 days
if (peerData.getLastConnected() != null
&& peerData.getLastConnected() > now - OLD_PEER_CONNECTION_PERIOD) {
// We did, so this is NOT an 'old' peer
return true;
}
// Last successful connection was more than 1 week ago - this is an 'old' peer
return false;
}
else {
// Best to wait until we have a connection attempt - assume not an 'old' peer until then
return true;
}
if (peerData.getLastConnected() == null
|| peerData.getLastConnected() > now - OLD_PEER_CONNECTION_PERIOD) {
return true;
}
return false;
};
// Disregard peers that are NOT 'old'
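For reference, the reworked predicate boils down to: a peer is only considered 'old' when we attempted it within the last day and still have no successful connection within the last week. Below is a self-contained sketch of that decision, with the day/week periods taken from the comments in the diff; the helper and its parameters are illustrative, not the Network class API.

public class OldPeerPredicateSketch {
    // Period values follow the "last day" / "last 7 days" comments in the diff above
    private static final long OLD_PEER_ATTEMPTED_PERIOD = 24 * 60 * 60 * 1000L;       // 1 day
    private static final long OLD_PEER_CONNECTION_PERIOD = 7 * 24 * 60 * 60 * 1000L;  // 1 week

    /** Mirrors the reworked predicate: returns true when the peer should NOT be treated as 'old'. */
    static boolean isNotOldPeer(long now, Long lastAttempted, Long lastConnected) {
        if (lastAttempted == null || lastAttempted <= now - OLD_PEER_ATTEMPTED_PERIOD)
            return true; // no attempt within the last day - wait before judging

        // Attempted recently: 'old' unless we also connected successfully within the last week
        return lastConnected != null && lastConnected > now - OLD_PEER_CONNECTION_PERIOD;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(isNotOldPeer(now, null, null));               // true  (never attempted)
        System.out.println(isNotOldPeer(now, now - 1000L, now - 1000L)); // true  (recent success)
        System.out.println(isNotOldPeer(now, now - 1000L, null));        // false ('old' peer)
    }
}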

View File

@@ -0,0 +1,112 @@
package org.qortal.network.message;
import com.google.common.primitives.Longs;
import org.qortal.transform.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.*;
/**
* For requesting online accounts info from remote peer, given our list of online accounts.
* <p></p>
* Different format to V1 and V2:<br>
* <ul>
* <li>V1 is: number of entries, then timestamp + pubkey for each entry</li>
* <li>V2 is: groups of: number of entries, timestamp, then pubkey for each entry</li>
* <li>V3 is: groups of: timestamp, number of entries (one per leading byte), then hash(pubkeys) for each entry</li>
* </ul>
* <p></p>
* End
*/
public class GetOnlineAccountsV3Message extends Message {
private static final Map<Long, Map<Byte, byte[]>> EMPTY_ONLINE_ACCOUNTS = Collections.emptyMap();
private Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte;
public GetOnlineAccountsV3Message(Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte) {
super(MessageType.GET_ONLINE_ACCOUNTS_V3);
// If we don't have ANY online accounts then it's an easier construction...
if (hashesByTimestampThenByte.isEmpty()) {
this.dataBytes = EMPTY_DATA_BYTES;
return;
}
// We should know exactly how many bytes to allocate now
int byteSize = hashesByTimestampThenByte.size() * (Transformer.TIMESTAMP_LENGTH + Transformer.BYTE_LENGTH);
byteSize += hashesByTimestampThenByte.values()
.stream()
.mapToInt(map -> map.size() * Transformer.PUBLIC_KEY_LENGTH)
.sum();
ByteArrayOutputStream bytes = new ByteArrayOutputStream(byteSize);
// Warning: no double-checking/fetching! We must be ConcurrentMap compatible.
// So no contains() then get() or multiple get()s on the same key/map.
try {
for (var outerMapEntry : hashesByTimestampThenByte.entrySet()) {
bytes.write(Longs.toByteArray(outerMapEntry.getKey()));
var innerMap = outerMapEntry.getValue();
// Number of entries: 1 - 256, where 256 is represented by 0
bytes.write(innerMap.size() & 0xFF);
for (byte[] hashBytes : innerMap.values()) {
bytes.write(hashBytes);
}
}
} catch (IOException e) {
throw new AssertionError("IOException shouldn't occur with ByteArrayOutputStream");
}
this.dataBytes = bytes.toByteArray();
this.checksumBytes = Message.generateChecksum(this.dataBytes);
}
private GetOnlineAccountsV3Message(int id, Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte) {
super(id, MessageType.GET_ONLINE_ACCOUNTS_V3);
this.hashesByTimestampThenByte = hashesByTimestampThenByte;
}
public Map<Long, Map<Byte, byte[]>> getHashesByTimestampThenByte() {
return this.hashesByTimestampThenByte;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) {
// 'empty' case
if (!bytes.hasRemaining()) {
return new GetOnlineAccountsV3Message(id, EMPTY_ONLINE_ACCOUNTS);
}
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte = new HashMap<>();
while (bytes.hasRemaining()) {
long timestamp = bytes.getLong();
int hashCount = bytes.get();
if (hashCount <= 0)
// 256 is represented by 0.
// Also converts negative signed value (e.g. -1) to proper positive unsigned value (255)
hashCount += 256;
Map<Byte, byte[]> hashesByByte = new HashMap<>();
for (int i = 0; i < hashCount; ++i) {
byte[] publicKeyHash = new byte[Transformer.PUBLIC_KEY_LENGTH];
bytes.get(publicKeyHash);
hashesByByte.put(publicKeyHash[0], publicKeyHash);
}
hashesByTimestampThenByte.put(timestamp, hashesByByte);
}
return new GetOnlineAccountsV3Message(id, hashesByTimestampThenByte);
}
}
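A small sketch of the leading count-byte convention used by this message, where an inner map of 1 to 256 entries is written as a single byte and 256 wraps to 0. The helper names are illustrative; the decode branch mirrors fromByteBuffer above.

public class CountByteSketch {
    // Encode: 1..256 entries become a single byte, with 256 wrapping to 0
    static int encodeCount(int entryCount) {
        return entryCount & 0xFF;
    }

    // Decode: mirror of the reader above - 0 means 256, and negative signed
    // byte values (e.g. -1) convert back to their unsigned form (255)
    static int decodeCount(byte countByte) {
        int hashCount = countByte;
        if (hashCount <= 0)
            hashCount += 256;
        return hashCount;
    }

    public static void main(String[] args) {
        System.out.println(decodeCount((byte) encodeCount(256))); // 256
        System.out.println(decodeCount((byte) encodeCount(255))); // 255
        System.out.println(decodeCount((byte) encodeCount(1)));   // 1
    }
}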

View File

@@ -46,6 +46,7 @@ public abstract class Message {
private static final int MAX_DATA_SIZE = 10 * 1024 * 1024; // 10MB
protected static final byte[] EMPTY_DATA_BYTES = new byte[0];
private static final ByteBuffer EMPTY_READ_ONLY_BYTE_BUFFER = ByteBuffer.wrap(EMPTY_DATA_BYTES).asReadOnlyBuffer();
protected int id;
protected final MessageType type;
@@ -126,7 +127,7 @@ public abstract class Message {
if (dataSize > 0 && dataSize + CHECKSUM_LENGTH > readOnlyBuffer.remaining())
return null;
ByteBuffer dataSlice = null;
ByteBuffer dataSlice = EMPTY_READ_ONLY_BYTE_BUFFER;
if (dataSize > 0) {
byte[] expectedChecksum = new byte[CHECKSUM_LENGTH];
readOnlyBuffer.get(expectedChecksum);
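The change above replaces a null data slice with one shared, read-only, zero-length ByteBuffer, so downstream message constructors always receive a usable buffer for empty payloads. A minimal illustration (assumptions only, not repository code) of why the shared instance is safe to reuse:

import java.nio.ByteBuffer;

public class EmptyBufferSketch {
    private static final byte[] EMPTY_DATA_BYTES = new byte[0];
    private static final ByteBuffer EMPTY_READ_ONLY_BYTE_BUFFER =
            ByteBuffer.wrap(EMPTY_DATA_BYTES).asReadOnlyBuffer();

    public static void main(String[] args) {
        // Safe to hand to any consumer: nothing to read, nothing to mutate
        System.out.println(EMPTY_READ_ONLY_BYTE_BUFFER.hasRemaining()); // false
        System.out.println(EMPTY_READ_ONLY_BYTE_BUFFER.isReadOnly());   // true
    }
}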

View File

@@ -46,6 +46,8 @@ public enum MessageType {
GET_ONLINE_ACCOUNTS(81, GetOnlineAccountsMessage::fromByteBuffer),
ONLINE_ACCOUNTS_V2(82, OnlineAccountsV2Message::fromByteBuffer),
GET_ONLINE_ACCOUNTS_V2(83, GetOnlineAccountsV2Message::fromByteBuffer),
// ONLINE_ACCOUNTS_V3(84, OnlineAccountsV3Message::fromByteBuffer),
GET_ONLINE_ACCOUNTS_V3(85, GetOnlineAccountsV3Message::fromByteBuffer),
ARBITRARY_DATA(90, ArbitraryDataMessage::fromByteBuffer),
GET_ARBITRARY_DATA(91, GetArbitraryDataMessage::fromByteBuffer),

View File

@@ -159,6 +159,9 @@ public interface AccountRepository {
/** Returns number of active reward-shares involving passed public key as the minting account only. */
public int countRewardShares(byte[] mintingAccountPublicKey) throws DataException;
/** Returns number of active self-shares involving passed public key as the minting account only. */
public int countSelfShares(byte[] mintingAccountPublicKey) throws DataException;
public List<RewardShareData> getRewardShares() throws DataException;
public List<RewardShareData> findRewardShares(List<String> mintingAccounts, List<String> recipientAccounts, List<String> involvedAddresses, Integer limit, Integer offset, Boolean reverse) throws DataException;

View File

@@ -688,6 +688,17 @@ public class HSQLDBAccountRepository implements AccountRepository {
}
}
@Override
public int countSelfShares(byte[] minterPublicKey) throws DataException {
String sql = "SELECT COUNT(*) FROM RewardShares WHERE minter_public_key = ? AND minter = recipient";
try (ResultSet resultSet = this.repository.checkedExecute(sql, minterPublicKey)) {
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to count self-shares in repository", e);
}
}
@Override
public List<RewardShareData> getRewardShares() throws DataException {
String sql = "SELECT minter_public_key, minter, recipient, share_percent, reward_share_public_key FROM RewardShares";

View File

@@ -23,7 +23,6 @@ import java.util.stream.Stream;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.globalization.Translator;
import org.qortal.gui.SysTray;
@@ -1003,7 +1002,7 @@ public class HSQLDBRepository implements Repository {
if (privateKey == null)
return null;
return PrivateKeyAccount.toPublicKey(privateKey);
return Crypto.toPublicKey(privateKey);
}
public static String ed25519PublicKeyToAddress(byte[] publicKey) {

View File

@@ -203,7 +203,7 @@ public class Settings {
private int maxRetries = 2;
/** Minimum peer version number required in order to sync with them */
private String minPeerVersion = "3.1.0";
private String minPeerVersion = "3.3.7";
/** Whether to allow connections with peers below minPeerVersion
* If true, we won't sync with them but they can still sync with us, and will show in the peers list
* If false, sync will be blocked both ways, and they will not appear in the peers list */

View File

@@ -140,8 +140,21 @@ public class RewardShareTransaction extends Transaction {
// Check the minting account hasn't reached the maximum number of reward-shares
int rewardShareCount = this.repository.getAccountRepository().countRewardShares(creator.getPublicKey());
if (rewardShareCount >= BlockChain.getInstance().getMaxRewardSharesPerMintingAccount())
int selfShareCount = this.repository.getAccountRepository().countSelfShares(creator.getPublicKey());
int maxRewardShares = BlockChain.getInstance().getMaxRewardSharesAtTimestamp(this.rewardShareTransactionData.getTimestamp());
if (creator.isFounder())
// Founders have a different limit
maxRewardShares = BlockChain.getInstance().getMaxRewardSharesPerFounderMintingAccount();
if (rewardShareCount >= maxRewardShares)
return ValidationResult.MAXIMUM_REWARD_SHARES;
// When filling all reward share slots, one must be a self share (after feature trigger timestamp)
if (this.rewardShareTransactionData.getTimestamp() >= BlockChain.getInstance().getRewardShareLimitTimestamp())
if (!isRecipientAlsoMinter && rewardShareCount == maxRewardShares-1 && selfShareCount == 0)
return ValidationResult.MAXIMUM_REWARD_SHARES;
} else {
// This transaction intends to modify/terminate an existing reward-share
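A hedged restatement of the new slot rule as a self-contained helper. The method and its parameters are hypothetical; the limit handling and the "reserve the last slot for a self share" behaviour follow the diff above.

public class RewardShareSlotRuleSketch {
    /**
     * Returns true if creating one more reward-share would be allowed.
     *
     * @param rewardShareCount   active reward-shares for this minter (self share included)
     * @param selfShareCount     active self-shares for this minter
     * @param maxRewardShares    limit at this timestamp (e.g. 3 after reduction, 6 for founders)
     * @param isSelfShare        whether the new share's recipient is the minter itself
     * @param afterLimitTrigger  whether rewardShareLimitTimestamp has passed
     */
    static boolean canCreateRewardShare(int rewardShareCount, int selfShareCount,
                                        int maxRewardShares, boolean isSelfShare,
                                        boolean afterLimitTrigger) {
        if (rewardShareCount >= maxRewardShares)
            return false;

        // After the feature trigger, the final remaining slot is reserved for a self share
        // when the minter doesn't already have one
        if (afterLimitTrigger && !isSelfShare
                && rewardShareCount == maxRewardShares - 1 && selfShareCount == 0)
            return false;

        return true;
    }

    public static void main(String[] args) {
        // With a limit of 3 and two sponsorships already in place, a third sponsorship
        // is rejected while a self share is still allowed
        System.out.println(canCreateRewardShare(2, 0, 3, false, true)); // false
        System.out.println(canCreateRewardShare(2, 0, 3, true, true));  // true
    }
}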

View File

@@ -15,7 +15,11 @@
"minAccountLevelToMint": 1,
"minAccountLevelForBlockSubmissions": 5,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 6,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 1657382400000, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 43200000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -41,7 +45,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 9999999, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 7200, 64800, 129600, 172800, 244000, 345600, 518400, 691200, 864000, 1036800 ],
"blockTimingsByHeight": [
@@ -57,11 +64,12 @@
"atFindNextTransactionFix": 275000,
"newBlockSigHeight": 320000,
"shareBinFix": 399000,
"rewardShareLimitTimestamp": 1657382400000,
"calcChainWeightTimestamp": 1620579600000,
"newConsensusTimestamp": 1655654400000,
"transactionV5Timestamp": 1642176000000,
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 1655222400000
"disableReferenceTimestamp": 1655222400000,
"aggregateSignatureTimestamp": 1656864000000
},
"genesisInfo": {
"version": 4,

View File

@@ -6,15 +6,15 @@
### Common ###
JSON = JSON Nachricht konnte nicht geparst werden
INSUFFICIENT_BALANCE = insufficient balance
INSUFFICIENT_BALANCE = Kein Ausgleich
UNAUTHORIZED = API-Aufruf nicht autorisiert
REPOSITORY_ISSUE = Repository-Fehler
NON_PRODUCTION = this API call is not permitted for production systems
NON_PRODUCTION = Dieser APi-Aufruf ist nicht gestattet für Produtkion
BLOCKCHAIN_NEEDS_SYNC = blockchain needs to synchronize first
BLOCKCHAIN_NEEDS_SYNC = Blockchain muss sich erst verbinden
NO_TIME_SYNC = noch keine Uhrensynchronisation
@@ -68,16 +68,16 @@ ORDER_UNKNOWN = unbekannte asset order ID
GROUP_UNKNOWN = Gruppe unbekannt
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blokchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = fremde Blockchain oder ElectrumX Netzwerk Problem
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = unzureichend Bilanz auf fremde blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
FOREIGN_BLOCKCHAIN_TOO_SOON = zu früh um fremde Blockchain-Transaktionen zu übertragen (Sperrzeit/mittlere Blockzeit)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
ORDER_SIZE_TOO_SMALL = Bestellmenge zu niedrig
### Data ###
FILE_NOT_FOUND = Datei nicht gefunden
NO_REPLY = peer did not reply with data
NO_REPLY = Peer hat nicht mit Daten verbinden

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "ko",
### Common ###
JSON = JSON 메시지를 구문 분석하지 못했습니다.
INSUFFICIENT_BALANCE = 잔고 부족
UNAUTHORIZED = 승인되지 않은 API 호출
REPOSITORY_ISSUE = 리포지토리 오류
NON_PRODUCTION = 이 API 호출은 프로덕션 시스템에 허용되지 않습니다.
BLOCKCHAIN_NEEDS_SYNC = 블록체인이 먼저 동기화되어야 함
NO_TIME_SYNC = 아직 동기화가 없습니다.
### Validation ###
INVALID_SIGNATURE = 무효 서명
INVALID_ADDRESS = 잘못된 주소
INVALID_PUBLIC_KEY = 잘못된 공개 키
INVALID_DATA = 잘못된 데이터
INVALID_NETWORK_ADDRESS = 잘못된 네트워크 주소
ADDRESS_UNKNOWN = 계정 주소 알 수 없음
INVALID_CRITERIA = 잘못된 검색 기준
INVALID_REFERENCE = 무효 참조
TRANSFORMATION_ERROR = JSON을 트랜잭션으로 변환할 수 없습니다.
INVALID_PRIVATE_KEY = 잘못된 개인 키
INVALID_HEIGHT = 잘못된 블록 높이
CANNOT_MINT = 계정을 만들 수 없습니다.
### Blocks ###
BLOCK_UNKNOWN = 알 수 없는 블록
### Transactions ###
TRANSACTION_UNKNOWN = 알 수 없는 거래
PUBLIC_KEY_NOT_FOUND = 공개 키를 찾을 수 없음
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = 유효하지 않은 거래: %s (%s)
### Naming ###
NAME_UNKNOWN = 이름 미상
### Asset ###
INVALID_ASSET_ID = 잘못된 자산 ID
INVALID_ORDER_ID = 자산 주문 ID가 잘못되었습니다.
ORDER_UNKNOWN = 알 수 없는 자산 주문 ID
### Groups ###
GROUP_UNKNOWN = 알 수 없는 그룹
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = 외부 블록체인 또는 일렉트럼X 네트워크 문제
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = 외부 블록체인 잔액 부족
FOREIGN_BLOCKCHAIN_TOO_SOON = 외부 블록체인 트랜잭션을 브로드캐스트하기에는 너무 빠릅니다(LockTime/중앙 블록 시간).
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = 주문량이 너무 적다
### Data ###
FILE_NOT_FOUND = 파일을 찾을 수 없음
NO_REPLY = 피어가 허용된 시간 내에 응답하지 않음

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "ro",
### Comun ###
JSON = nu s-a reusit analizarea mesajului JSON
INSUFFICIENT_BALANCE = fonduri insuficiente
UNAUTHORIZED = Solicitare API neautorizata
REPOSITORY_ISSUE = eroare a depozitarului
NON_PRODUCTION = aceasta solictare API nu este permisa pentru sistemele de productie
BLOCKCHAIN_NEEDS_SYNC = blockchain-ul trebuie sa se sincronizeze mai intai
NO_TIME_SYNC = nu exista inca o sincronizare a ceasului
### Validation ###
INVALID_SIGNATURE = semnatura invalida
INVALID_ADDRESS = adresa invalida
INVALID_PUBLIC_KEY = cheie publica invalid
INVALID_DATA = date invalida
INVALID_NETWORK_ADDRESS = invalid network address
ADDRESS_UNKNOWN = adresa contului necunoscuta
INVALID_CRITERIA = criteriu de cautare invalid
INVALID_REFERENCE = referinta invalida
TRANSFORMATION_ERROR = nu s-a putut transforma JSON in tranzactie
INVALID_PRIVATE_KEY = invalid private key
INVALID_HEIGHT = dimensiunea blocului invalida
CANNOT_MINT = contul nu poate produce moneda
### Blocks ###
BLOCK_UNKNOWN = bloc necunoscut
### Transactions ###
TRANSACTION_UNKNOWN = tranzactie necunoscuta
PUBLIC_KEY_NOT_FOUND = nu s-a gasit cheia publica
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = tranzactie invalida: %s (%s)
### Naming ###
NAME_UNKNOWN = nume necunoscut
### Asset ###
INVALID_ASSET_ID = ID active invalid
INVALID_ORDER_ID = ID-ul de comanda al activului invalid
ORDER_UNKNOWN = ID necunoscut al comenzii activului
### Groups ###
GROUP_UNKNOWN = grup necunoscut
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = problema de blockchain strain sau de retea ElectrumX
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = sold insuficient pe blockchain strain
FOREIGN_BLOCKCHAIN_TOO_SOON = prea devreme pentru a difuza o tranzactie blockchain straina (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = valoarea tranzactiei este prea mica
### Data ###
FILE_NOT_FOUND = nu s-a gasit fisierul
NO_REPLY = omologul nu a raspuns in termenul stabilit

View File

@@ -0,0 +1,46 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
APPLYING_UPDATE_AND_RESTARTING = 자동 업데이트를 적용하고 다시 시작하는 중...
AUTO_UPDATE = 자동 업데이트
BLOCK_HEIGHT = 높이
BUILD_VERSION = 빌드 버전
CHECK_TIME_ACCURACY = 시간 정확도 점검
CONNECTING = 연결하는
CONNECTION = 연결
CONNECTIONS = 연결
CREATING_BACKUP_OF_DB_FILES = 데이터베이스 파일의 백업을 만드는 중...
DB_BACKUP = Database Backup
DB_CHECKPOINT = Database Checkpoint
DB_MAINTENANCE = 데이터베이스 유지 관리
EXIT = 종료
LITE_NODE = 라이트 노드
MINTING_DISABLED = 민팅중이 아님
MINTING_ENABLED = \u2714 민팅
OPEN_UI = UI 열기
PERFORMING_DB_CHECKPOINT = 커밋되지 않은 데이터베이스 변경 내용을 저장하는 중...
PERFORMING_DB_MAINTENANCE = 예약된 유지 관리 수행 중...
SYNCHRONIZE_CLOCK = 시간 동기화
SYNCHRONIZING_BLOCKCHAIN = 동기화중
SYNCHRONIZING_CLOCK = 시간 동기화

View File

@@ -0,0 +1,46 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
APPLYING_UPDATE_AND_RESTARTING = Aplicarea actualizarii automate si repornire...
AUTO_UPDATE = Actualizare automata
BLOCK_HEIGHT = dimensiune
BUILD_VERSION = versiunea compilatiei
CHECK_TIME_ACCURACY = verificare exactitate ora
CONNECTING = Se conecteaza
CONNECTION = conexiune
CONNECTIONS = conexiuni
CREATING_BACKUP_OF_DB_FILES = Se creaza copia bazei de date
DB_BACKUP = Copie baza de date
DB_CHECKPOINT = Punct de control al bazei de date
DB_MAINTENANCE = Database Maintenance
EXIT = iesire
LITE_NODE = Nod Lite
MINTING_DISABLED = nu produce moneda
MINTING_ENABLED = \u2714 Minting
OPEN_UI = Deschidere interfata utilizator IU
PERFORMING_DB_CHECKPOINT = Salvarea modificarilor nerealizate ale bazei de date...
PERFORMING_DB_MAINTENANCE = Efectuarea intretinerii programate...
SYNCHRONIZE_CLOCK = Sincronizare ceas
SYNCHRONIZING_BLOCKCHAIN = Sincronizare
SYNCHRONIZING_CLOCK = Se sincronizeaza ceasul

View File

@@ -25,7 +25,7 @@ DB_CHECKPOINT = Databaskontrollpunkt
DB_MAINTENANCE = Databasunderhåll
EXIT = Utgång
EXIT = Avsluta
MINTING_DISABLED = Präglar INTE

View File

@@ -0,0 +1,195 @@
#
ACCOUNT_ALREADY_EXISTS = 계정이 이미 존재합니다.
ACCOUNT_CANNOT_REWARD_SHARE = 계정이 보상을 공유할 수 없습니다.
ADDRESS_ABOVE_RATE_LIMIT = 주소가 지정된 속도 제한에 도달했습니다.
ADDRESS_BLOCKED = 이 주소는 차단되었습니다.
ALREADY_GROUP_ADMIN = 이미 그룹 관리자
ALREADY_GROUP_MEMBER = 이미 그룹 맴버
ALREADY_VOTED_FOR_THAT_OPTION = 이미 그 옵션에 투표했다.
ASSET_ALREADY_EXISTS = 자산이 이미 있습니다.
ASSET_DOES_NOT_EXIST = 자산이 존재하지 않습니다.
ASSET_DOES_NOT_MATCH_AT = 자산이 AT의 자산과 일치하지 않습니다.
ASSET_NOT_SPENDABLE = 자산을 사용할 수 없습니다.
AT_ALREADY_EXISTS = AT가 이미 있습니다.
AT_IS_FINISHED = AT가 완료되었습니다.
AT_UNKNOWN = 알 수 없는 AT
BAN_EXISTS = 금지가 이미 있습니다.
BAN_UNKNOWN = 금지 알 수 없음
BANNED_FROM_GROUP = 그룹에서 금지
BUYER_ALREADY_OWNER = 구매자는 이미 소유자입니다
CLOCK_NOT_SYNCED = 동기화되지 않은 시간
DUPLICATE_MESSAGE = 주소가 중복 메시지를 보냈습니다.
DUPLICATE_OPTION = 중복 옵션
GROUP_ALREADY_EXISTS = 그룹이 이미 존재합니다
GROUP_APPROVAL_DECIDED = 그룹 승인이 이미 결정되었습니다.
GROUP_APPROVAL_NOT_REQUIRED = 그룹 승인이 필요하지 않음
GROUP_DOES_NOT_EXIST = 그룹이 존재하지 않습니다
GROUP_ID_MISMATCH = 그룹 ID 불일치
GROUP_OWNER_CANNOT_LEAVE = 그룹 소유자는 그룹을 나갈 수 없습니다
HAVE_EQUALS_WANT = 소유 자산은 원하는 자산과 동일합니다.
INCORRECT_NONCE = 잘못된 PoW nonce
INSUFFICIENT_FEE = 부족한 수수료
INVALID_ADDRESS = 잘못된 주소
INVALID_AMOUNT = 유효하지 않은 금액
INVALID_ASSET_OWNER = 잘못된 자산 소유자
INVALID_AT_TRANSACTION = 유효하지 않은 AT 거래
INVALID_AT_TYPE_LENGTH = 잘못된 AT '유형' 길이
INVALID_BUT_OK = 유효하지 않지만 OK
INVALID_CREATION_BYTES = 잘못된 생성 바이트
INVALID_DATA_LENGTH = 잘못된 데이터 길이
INVALID_DESCRIPTION_LENGTH = 잘못된 설명 길이
INVALID_GROUP_APPROVAL_THRESHOLD = 잘못된 그룹 승인 임계값
INVALID_GROUP_BLOCK_DELAY = 잘못된 그룹 승인 차단 지연
INVALID_GROUP_ID = 잘못된 그룹 ID
INVALID_GROUP_OWNER = 잘못된 그룹 소유자
INVALID_LIFETIME = 유효하지 않은 수명
INVALID_NAME_LENGTH = 잘못된 이름 길이
INVALID_NAME_OWNER = 잘못된 이름 소유자
INVALID_OPTION_LENGTH = 잘못된 옵션 길이
INVALID_OPTIONS_COUNT = 잘못된 옵션 수
INVALID_ORDER_CREATOR = 잘못된 주문 생성자
INVALID_PAYMENTS_COUNT = 유효하지 않은 지불 수
INVALID_PUBLIC_KEY = 잘못된 공개 키
INVALID_QUANTITY = 유효하지 않은 수량
INVALID_REFERENCE = 잘못된 참조
INVALID_RETURN = 무효 반환
INVALID_REWARD_SHARE_PERCENT = 잘못된 보상 공유 비율
INVALID_SELLER = 무효 판매자
INVALID_TAGS_LENGTH = invalid 'tags' length
INVALID_TIMESTAMP_SIGNATURE = 유효하지 않은 타임스탬프 서명
INVALID_TX_GROUP_ID = 잘못된 트랜잭션 그룹 ID
INVALID_VALUE_LENGTH = 잘못된 '값' 길이
INVITE_UNKNOWN = 알 수 없는 그룹 초대
JOIN_REQUEST_EXISTS = 그룹 가입 요청이 이미 있습니다.
MAXIMUM_REWARD_SHARES = 이미 이 계정에 대한 최대 보상 공유 수에 도달했습니다.t
MISSING_CREATOR = 실종된 창작자
MULTIPLE_NAMES_FORBIDDEN = 계정당 여러 등록 이름은 금지되어 있습니다.
NAME_ALREADY_FOR_SALE = 이미 판매 중인 이름
NAME_ALREADY_REGISTERED = 이미 등록된 이름
NAME_BLOCKED = 이 이름은 차단되었습니다
NAME_DOES_NOT_EXIST = 이름이 존재하지 않습니다
NAME_NOT_FOR_SALE = 이름은 판매용이 아닙니다
NAME_NOT_NORMALIZED = 유니코드 '정규화된' 형식이 아닌 이름
NEGATIVE_AMOUNT = 유효하지 않은/음수 금액
NEGATIVE_FEE = 무효/음수 수수료
NEGATIVE_PRICE = 유효하지 않은/음수 가격
NO_BALANCE = 잔액 불충분
NO_BLOCKCHAIN_LOCK = 노드의 블록체인이 현재 사용 중입니다.
NO_FLAG_PERMISSION = 계정에 해당 권한이 없습니다
NOT_GROUP_ADMIN = 계정은 그룹 관리자가 아닙니다.
NOT_GROUP_MEMBER = 계정이 그룹 구성원이 아닙니다.
NOT_MINTING_ACCOUNT = 계정은 발행할 수 없습니다
NOT_YET_RELEASED = 아직 출시되지 않은 기능
OK = OK
ORDER_ALREADY_CLOSED = 아직 출시되지 않은 기능
ORDER_DOES_NOT_EXIST = 자산 거래 주문이 존재하지 않습니다
POLL_ALREADY_EXISTS = 설문조사가 이미 존재합니다
POLL_DOES_NOT_EXIST = 설문조사가 존재하지 않습니다
POLL_OPTION_DOES_NOT_EXIST = 투표 옵션이 존재하지 않습니다
PUBLIC_KEY_UNKNOWN = 공개 키 알 수 없음
REWARD_SHARE_UNKNOWN = 알 수 없는 보상 공유
SELF_SHARE_EXISTS = 자체 공유(보상 공유)가 이미 존재합니다.
TIMESTAMP_TOO_NEW = 타임스탬프가 너무 새롭습니다.
TIMESTAMP_TOO_OLD = 너무 오래된 타임스탬프
TOO_MANY_UNCONFIRMED = 계정에 보류 중인 확인되지 않은 거래가 너무 많습니다.
TRANSACTION_ALREADY_CONFIRMED = 거래가 이미 확인되었습니다
TRANSACTION_ALREADY_EXISTS = 거래가 이미 존재합니다
TRANSACTION_UNKNOWN = 알 수 없는 거래
TX_GROUP_ID_MISMATCH = 트랜잭션의 그룹 ID가 일치하지 않습니다

View File

@@ -0,0 +1,195 @@
#
ACCOUNT_ALREADY_EXISTS = contul exista deja
ACCOUNT_CANNOT_REWARD_SHARE = contul nu poate genera reward-share
ADDRESS_ABOVE_RATE_LIMIT = adresa a atins limita specificata
ADDRESS_BLOCKED = aceasta adresa este blocata
ALREADY_GROUP_ADMIN = sunteti deja admin
ALREADY_GROUP_MEMBER = sunteti deja membru
ALREADY_VOTED_FOR_THAT_OPTION = deja ati votat pentru aceasta optiune
ASSET_ALREADY_EXISTS = activul deja exista
ASSET_DOES_NOT_EXIST = activul un exista
ASSET_DOES_NOT_MATCH_AT = activul nu se potriveste cu activul TA
ASSET_NOT_SPENDABLE = activul nu poate fi utilizat
AT_ALREADY_EXISTS = TA exista deja
AT_IS_FINISHED = TA s-a terminat
AT_UNKNOWN = TA necunoscuta
BAN_EXISTS = ban-ul este deja folosit
BAN_UNKNOWN = ban necunoscut
BANNED_FROM_GROUP = accesul la grup a fost blocat
BUYER_ALREADY_OWNER = cumparatorul este deja detinator
CLOCK_NOT_SYNCED = ceasul nu este sincronizat
DUPLICATE_MESSAGE = adresa a trimis mesaje duplicate
DUPLICATE_OPTION = optiune duplicata
GROUP_ALREADY_EXISTS = grupul deja exista
GROUP_APPROVAL_DECIDED = aprobarea grupului a fost deja decisa
GROUP_APPROVAL_NOT_REQUIRED = aprobarea grupului nu este solicitata
GROUP_DOES_NOT_EXIST = grupul nu exista
GROUP_ID_MISMATCH = ID-ul grupului incorect
GROUP_OWNER_CANNOT_LEAVE = proprietarul grupului nu poate parasi grupul
HAVE_EQUALS_WANT = a avea un obiect este acelasi lucru cu a vrea un obiect
INCORRECT_NONCE = numar PoW incorect
INSUFFICIENT_FEE = taxa insuficienta
INVALID_ADDRESS = adresa invalida
INVALID_AMOUNT = suma invalida
INVALID_ASSET_OWNER = propietar al activului invalid
INVALID_AT_TRANSACTION = tranzactie automata invalida
INVALID_AT_TYPE_LENGTH = TA invalida 'tip' lungime
INVALID_BUT_OK = invalid dar OK
INVALID_CREATION_BYTES = octeti de creatie invalizi
INVALID_DATA_LENGTH = lungimea datelor invalida
INVALID_DESCRIPTION_LENGTH = lungimea descrierii invalida
INVALID_GROUP_APPROVAL_THRESHOLD = prag de aprobare a grupului invalid
INVALID_GROUP_BLOCK_DELAY = intarziere invalida a blocului de aprobare a grupului
INVALID_GROUP_ID = ID de grup invalid
INVALID_GROUP_OWNER = proprietar de grup invalid
INVALID_LIFETIME = durata de viata invalida
INVALID_NAME_LENGTH = lungimea numelui invalida
INVALID_NAME_OWNER = numele proprietarului invalid
INVALID_OPTION_LENGTH = lungimea optiunii invalida
INVALID_OPTIONS_COUNT = contor de optiuni invalid
INVALID_ORDER_CREATOR = creator de ordine invalid
INVALID_PAYMENTS_COUNT = contor de plati invalid
INVALID_PUBLIC_KEY = cheie publica invalida
INVALID_QUANTITY = cantitate invalida
INVALID_REFERENCE = referinta invalida
INVALID_RETURN = returnare invalida
INVALID_REWARD_SHARE_PERCENT = procentaj al cotei de recompensa invalid
INVALID_SELLER = vanzator invalid
INVALID_TAGS_LENGTH = lungime a tagurilor invalida
INVALID_TIMESTAMP_SIGNATURE = semnatura timestamp invalida
INVALID_TX_GROUP_ID = ID-ul grupului de tranzactii invalid
INVALID_VALUE_LENGTH = lungimea "valorii "invalida
INVITE_UNKNOWN = invitatie de grup invalida
JOIN_REQUEST_EXISTS = cererea de aderare la grup exista deja
MAXIMUM_REWARD_SHARES = ati ajuns deja la numarul maxim de cote de recompensa pentru acest cont
MISSING_CREATOR = creator lipsa
MULTIPLE_NAMES_FORBIDDEN = este interzisa folosirea mai multor nume inregistrate pe cont
NAME_ALREADY_FOR_SALE = numele este deja de vanzare
NAME_ALREADY_REGISTERED = nume deja inregistrat
NAME_BLOCKED = numele este blocat
NAME_DOES_NOT_EXIST = numele nu exista
NAME_NOT_FOR_SALE = numele nu este de vanzare
NAME_NOT_NORMALIZED = numele nu este in forma "normalizata" Unicode
NEGATIVE_AMOUNT = suma invalida/negativa
NEGATIVE_FEE = taxa invalida/negativa
NEGATIVE_PRICE = pret invalid/negativ
NO_BALANCE = fonduri insuficiente
NO_BLOCKCHAIN_LOCK = nodul blochain-ului este momentan ocupat
NO_FLAG_PERMISSION = contul nu are aceasta permisiune
NOT_GROUP_ADMIN = contul nu este un administrator de grup
NOT_GROUP_MEMBER = contul nu este un membru al grupului
NOT_MINTING_ACCOUNT = contul nu poate genera moneda Qort
NOT_YET_RELEASED = caracteristica nu este inca disponibila
OK = OK
ORDER_ALREADY_CLOSED = ordinul de tranzactionare a activului este deja inchis
ORDER_DOES_NOT_EXIST = ordinul de comercializare a activului nu exista
POLL_ALREADY_EXISTS = sondajul exista deja
POLL_DOES_NOT_EXIST = sondajul nu exista
POLL_OPTION_DOES_NOT_EXIST = optiunea de sondaj nu exista
PUBLIC_KEY_UNKNOWN = cheie publica necunoscuta
REWARD_SHARE_UNKNOWN = cheie de cota de recompensa necunoscuta
SELF_SHARE_EXISTS = cota personala (cota de recompensa) exista deja
TIMESTAMP_TOO_NEW = timestamp prea nou
TIMESTAMP_TOO_OLD = timestamp prea vechi
TOO_MANY_UNCONFIRMED = contul are prea multe tranzactii neconfirmate in asteptare
TRANZACTIE_DEJA_CONFIRMATA = tranzactia a fost deja confirmata
TRANSACTION_ALREADY_EXISTS = tranzactia exista deja
TRANSACTION_UNKNOWN = tranzactie necunoscuta
TX_GROUP_ID_MISMATCH = ID-ul de grup al tranzactiei nu se potriveste

View File

@@ -4,7 +4,7 @@ import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.crypto.AES;
import org.qortal.crypto.BouncyCastle25519;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.crypto.Crypto;
import org.qortal.test.common.Common;
import org.qortal.utils.Base58;
@@ -123,14 +123,14 @@ public class CryptoTests extends Common {
random.nextBytes(ed25519PrivateKey);
PrivateKeyAccount account = new PrivateKeyAccount(null, ed25519PrivateKey);
byte[] x25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(account.getPrivateKey());
byte[] x25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(account.getPrivateKey());
X25519PrivateKeyParameters x25519PrivateKeyParams = new X25519PrivateKeyParameters(x25519PrivateKey, 0);
// Derive X25519 public key from X25519 private key
byte[] x25519PublicKeyFromPrivate = x25519PrivateKeyParams.generatePublicKey().getEncoded();
// Derive X25519 public key from Ed25519 public key
byte[] x25519PublicKeyFromEd25519 = BouncyCastle25519.toX25519PublicKey(account.getPublicKey());
byte[] x25519PublicKeyFromEd25519 = Qortal25519Extras.toX25519PublicKey(account.getPublicKey());
assertEquals(String.format("Public keys do not match, from private key %s", Base58.encode(ed25519PrivateKey)), Base58.encode(x25519PublicKeyFromPrivate), Base58.encode(x25519PublicKeyFromEd25519));
}
@@ -162,10 +162,10 @@ public class CryptoTests extends Common {
}
private static byte[] calcBCSharedSecret(byte[] ed25519PrivateKey, byte[] ed25519PublicKey) {
byte[] x25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(ed25519PrivateKey);
byte[] x25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(ed25519PrivateKey);
X25519PrivateKeyParameters privateKeyParams = new X25519PrivateKeyParameters(x25519PrivateKey, 0);
byte[] x25519PublicKey = BouncyCastle25519.toX25519PublicKey(ed25519PublicKey);
byte[] x25519PublicKey = Qortal25519Extras.toX25519PublicKey(ed25519PublicKey);
X25519PublicKeyParameters publicKeyParams = new X25519PublicKeyParameters(x25519PublicKey, 0);
byte[] sharedSecret = new byte[32];
@@ -186,10 +186,10 @@ public class CryptoTests extends Common {
final String expectedTheirX25519PublicKey = "ANjnZLRSzW9B1aVamiYGKP3XtBooU9tGGDjUiibUfzp2";
final String expectedSharedSecret = "DTMZYG96x8XZuGzDvHFByVLsXedimqtjiXHhXPVe58Ap";
byte[] ourX25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(ourPrivateKey);
byte[] ourX25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(ourPrivateKey);
assertEquals("X25519 private key incorrect", expectedOurX25519PrivateKey, Base58.encode(ourX25519PrivateKey));
byte[] theirX25519PublicKey = BouncyCastle25519.toX25519PublicKey(theirPublicKey);
byte[] theirX25519PublicKey = Qortal25519Extras.toX25519PublicKey(theirPublicKey);
assertEquals("X25519 public key incorrect", expectedTheirX25519PublicKey, Base58.encode(theirX25519PublicKey));
byte[] sharedSecret = calcBCSharedSecret(ourPrivateKey, theirPublicKey);

View File

@@ -0,0 +1,190 @@
package org.qortal.test;
import com.google.common.hash.HashCode;
import com.google.common.primitives.Bytes;
import com.google.common.primitives.Longs;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.Test;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.transform.Transformer;
import java.math.BigInteger;
import java.security.SecureRandom;
import java.security.Security;
import java.util.*;
import java.util.stream.Collectors;
import static org.junit.Assert.*;
public class SchnorrTests extends Qortal25519Extras {
static {
// This must go before any calls to LogManager/Logger
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
Security.insertProviderAt(new BouncyCastleProvider(), 0);
Security.insertProviderAt(new BouncyCastleJsseProvider(), 1);
}
private static final SecureRandom SECURE_RANDOM = new SecureRandom();
@Test
public void testConversion() {
// Scalar form
byte[] scalarA = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
System.out.printf("a: %s%n", HashCode.fromBytes(scalarA));
byte[] pointA = HashCode.fromString("5866666666666666666666666666666666666666666666666666666666666666".toLowerCase()).asBytes();
BigInteger expectedY = new BigInteger("46316835694926478169428394003475163141307993866256225615783033603165251855960");
PointAccum pointAccum = Qortal25519Extras.newPointAccum();
scalarMultBase(scalarA, pointAccum);
byte[] encoded = new byte[POINT_BYTES];
if (0 == encodePoint(pointAccum, encoded, 0))
fail("Point encoding failed");
System.out.printf("aG: %s%n", HashCode.fromBytes(encoded));
assertArrayEquals(pointA, encoded);
byte[] yBytes = new byte[POINT_BYTES];
System.arraycopy(encoded,0, yBytes, 0, encoded.length);
Bytes.reverse(yBytes);
System.out.printf("yBytes: %s%n", HashCode.fromBytes(yBytes));
BigInteger yBI = new BigInteger(yBytes);
System.out.printf("aG y: %s%n", yBI);
assertEquals(expectedY, yBI);
}
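Why this test can byte-reverse the encoding and compare against a decimal value: Ed25519 encodes a point as its y coordinate in 32 little-endian bytes, with the top bit of the last byte carrying x mod 2 (per RFC 8032). For the base point that sign bit is 0, so reversing the bytes yields y directly, and the expected value is the standard base-point coordinate y = 4/5 mod p:

$$ \operatorname{enc}(P) = \operatorname{LE}_{32\ \text{bytes}}\big(y + 2^{255}\,(x \bmod 2)\big) $$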
@Test
public void testAddition() {
/*
* 1G: b'5866666666666666666666666666666666666666666666666666666666666666'
* 2G: b'c9a3f86aae465f0e56513864510f3997561fa2c9e85ea21dc2292309f3cd6022'
* 3G: b'd4b4f5784868c3020403246717ec169ff79e26608ea126a1ab69ee77d1b16712'
*/
// Scalar form
byte[] s1 = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] s2 = HashCode.fromString("0200000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
// Point form
byte[] g1 = HashCode.fromString("5866666666666666666666666666666666666666666666666666666666666666".toLowerCase()).asBytes();
byte[] g2 = HashCode.fromString("c9a3f86aae465f0e56513864510f3997561fa2c9e85ea21dc2292309f3cd6022".toLowerCase()).asBytes();
byte[] g3 = HashCode.fromString("d4b4f5784868c3020403246717ec169ff79e26608ea126a1ab69ee77d1b16712".toLowerCase()).asBytes();
PointAccum p1 = Qortal25519Extras.newPointAccum();
scalarMultBase(s1, p1);
PointAccum p2 = Qortal25519Extras.newPointAccum();
scalarMultBase(s2, p2);
pointAdd(pointCopy(p1), p2);
byte[] encoded = new byte[POINT_BYTES];
if (0 == encodePoint(p2, encoded, 0))
fail("Point encoding failed");
System.out.printf("sum: %s%n", HashCode.fromBytes(encoded));
assertArrayEquals(g3, encoded);
}
@Test
public void testSimpleSign() {
byte[] privateKey = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] message = HashCode.fromString("01234567".toLowerCase()).asBytes();
byte[] signature = signForAggregation(privateKey, message);
System.out.printf("signature: %s%n", HashCode.fromBytes(signature));
}
@Test
public void testSimpleVerify() {
byte[] privateKey = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] message = HashCode.fromString("01234567".toLowerCase()).asBytes();
byte[] signature = HashCode.fromString("13e58e88f3df9e06637d2d5bbb814c028e3ba135494530b9d3b120bdb31168d62c70a37ae9cfba816fe6038ee1ce2fb521b95c4a91c7ff0bb1dd2e67733f2b0d".toLowerCase()).asBytes();
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
assertTrue(verifyAggregated(publicKey, signature, message));
}
@Test
public void testSimpleSignAndVerify() {
byte[] privateKey = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] message = HashCode.fromString("01234567".toLowerCase()).asBytes();
byte[] signature = signForAggregation(privateKey, message);
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
assertTrue(verifyAggregated(publicKey, signature, message));
}
@Test
public void testSimpleAggregate() {
List<OnlineAccountData> onlineAccounts = generateOnlineAccounts(1);
byte[] aggregatePublicKey = aggregatePublicKeys(onlineAccounts.stream().map(OnlineAccountData::getPublicKey).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate public key: %s%n", HashCode.fromBytes(aggregatePublicKey));
byte[] aggregateSignature = aggregateSignatures(onlineAccounts.stream().map(OnlineAccountData::getSignature).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate signature: %s%n", HashCode.fromBytes(aggregateSignature));
OnlineAccountData onlineAccount = onlineAccounts.get(0);
assertArrayEquals(String.format("expected: %s, actual: %s", HashCode.fromBytes(onlineAccount.getPublicKey()), HashCode.fromBytes(aggregatePublicKey)), onlineAccount.getPublicKey(), aggregatePublicKey);
assertArrayEquals(String.format("expected: %s, actual: %s", HashCode.fromBytes(onlineAccount.getSignature()), HashCode.fromBytes(aggregateSignature)), onlineAccount.getSignature(), aggregateSignature);
// This is the crucial test:
long timestamp = onlineAccount.getTimestamp();
byte[] timestampBytes = Longs.toByteArray(timestamp);
assertTrue(verifyAggregated(aggregatePublicKey, aggregateSignature, timestampBytes));
}
@Test
public void testMultipleAggregate() {
List<OnlineAccountData> onlineAccounts = generateOnlineAccounts(5000);
byte[] aggregatePublicKey = aggregatePublicKeys(onlineAccounts.stream().map(OnlineAccountData::getPublicKey).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate public key: %s%n", HashCode.fromBytes(aggregatePublicKey));
byte[] aggregateSignature = aggregateSignatures(onlineAccounts.stream().map(OnlineAccountData::getSignature).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate signature: %s%n", HashCode.fromBytes(aggregateSignature));
OnlineAccountData onlineAccount = onlineAccounts.get(0);
// This is the crucial test:
long timestamp = onlineAccount.getTimestamp();
byte[] timestampBytes = Longs.toByteArray(timestamp);
assertTrue(verifyAggregated(aggregatePublicKey, aggregateSignature, timestampBytes));
}
private List<OnlineAccountData> generateOnlineAccounts(int numAccounts) {
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
long timestamp = System.currentTimeMillis();
byte[] timestampBytes = Longs.toByteArray(timestamp);
for (int a = 0; a < numAccounts; ++a) {
byte[] privateKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
SECURE_RANDOM.nextBytes(privateKey);
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
byte[] signature = signForAggregation(privateKey, timestampBytes);
onlineAccounts.add(new OnlineAccountData(timestamp, signature, publicKey));
}
return onlineAccounts;
}
}

View File

@@ -6,6 +6,7 @@ import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.utils.Base58;
public class RewardShareKeys {
@@ -28,7 +29,7 @@ public class RewardShareKeys {
PublicKeyAccount recipientAccount = new PublicKeyAccount(null, args.length > 1 ? Base58.decode(args[1]) : minterAccount.getPublicKey());
byte[] rewardSharePrivateKey = minterAccount.getRewardSharePrivateKey(recipientAccount.getPublicKey());
byte[] rewardSharePublicKey = PrivateKeyAccount.toPublicKey(rewardSharePrivateKey);
byte[] rewardSharePublicKey = Crypto.toPublicKey(rewardSharePrivateKey);
System.out.println(String.format("Minter account: %s", minterAccount.getAddress()));
System.out.println(String.format("Minter's public key: %s", Base58.encode(minterAccount.getPublicKey())));

View File

@@ -6,6 +6,7 @@ import java.util.HashMap;
import java.util.Map;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.PaymentTransactionData;
import org.qortal.data.transaction.RewardShareTransactionData;
@@ -40,12 +41,15 @@ public class AccountUtils {
public static TransactionData createRewardShare(Repository repository, String minter, String recipient, int sharePercent) throws DataException {
PrivateKeyAccount mintingAccount = Common.getTestAccount(repository, minter);
PrivateKeyAccount recipientAccount = Common.getTestAccount(repository, recipient);
return createRewardShare(repository, mintingAccount, recipientAccount, sharePercent);
}
public static TransactionData createRewardShare(Repository repository, PrivateKeyAccount mintingAccount, PrivateKeyAccount recipientAccount, int sharePercent) throws DataException {
byte[] reference = mintingAccount.getLastReference();
long timestamp = repository.getTransactionRepository().fromSignature(reference).getTimestamp() + 1;
byte[] rewardSharePrivateKey = mintingAccount.getRewardSharePrivateKey(recipientAccount.getPublicKey());
byte[] rewardSharePublicKey = PrivateKeyAccount.toPublicKey(rewardSharePrivateKey);
byte[] rewardSharePublicKey = Crypto.toPublicKey(rewardSharePrivateKey);
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, mintingAccount.getPublicKey(), fee, null);
TransactionData transactionData = new RewardShareTransactionData(baseTransactionData, recipientAccount.getAddress(), rewardSharePublicKey, sharePercent);
@@ -65,6 +69,15 @@ public class AccountUtils {
return rewardSharePrivateKey;
}
public static byte[] rewardShare(Repository repository, PrivateKeyAccount minterAccount, PrivateKeyAccount recipientAccount, int sharePercent) throws DataException {
TransactionData transactionData = createRewardShare(repository, minterAccount, recipientAccount, sharePercent);
TransactionUtils.signAndMint(repository, transactionData, minterAccount);
byte[] rewardSharePrivateKey = minterAccount.getRewardSharePrivateKey(recipientAccount.getPublicKey());
return rewardSharePrivateKey;
}
public static Map<String, Map<Long, Long>> getBalances(Repository repository, long... assetIds) throws DataException {
Map<String, Map<Long, Long>> balances = new HashMap<>();

View File

@@ -7,6 +7,7 @@ import java.math.BigDecimal;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;
import java.security.Security;
import java.util.ArrayList;
import java.util.Collections;
@@ -25,6 +26,7 @@ import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.asset.AssetData;
@@ -111,6 +113,12 @@ public class Common {
return testAccountsByName.values().stream().map(account -> new TestAccount(repository, account)).collect(Collectors.toList());
}
public static PrivateKeyAccount generateRandomSeedAccount(Repository repository) {
byte[] seed = new byte[32];
new SecureRandom().nextBytes(seed);
return new PrivateKeyAccount(repository, seed);
}
public static void useSettingsAndDb(String settingsFilename, boolean dbInMemory) throws DataException {
closeRepository();

View File

@@ -75,7 +75,7 @@ public class GetWalletTransactions {
System.out.println(String.format("Found %d transaction%s", transactions.size(), (transactions.size() != 1 ? "s" : "")));
for (SimpleTransaction transaction : transactions.stream().sorted(Comparator.comparingInt(SimpleTransaction::getTimestamp)).collect(Collectors.toList()))
for (SimpleTransaction transaction : transactions.stream().sorted(Comparator.comparingLong(SimpleTransaction::getTimestamp)).collect(Collectors.toList()))
System.out.println(String.format("%s", transaction));
}

View File

@@ -0,0 +1,63 @@
package org.qortal.test.minting;
import org.junit.Before;
import org.junit.Test;
import org.qortal.block.Block;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.transform.Transformer;
import org.qortal.utils.NTP;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class BlockTimestampTests extends Common {
private static class BlockTimestampDataPoint {
public byte[] minterPublicKey;
public int minterAccountLevel;
public long blockTimestamp;
}
private static final Random RANDOM = new Random();
@Before
public void beforeTest() throws DataException {
Common.useSettings("test-settings-v2-block-timestamps.json");
NTP.setFixedOffset(0L);
}
@Test
public void testTimestamps() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
Block parentBlock = BlockUtils.mintBlock(repository);
BlockData parentBlockData = parentBlock.getBlockData();
// Generate lots of test minters
List<BlockTimestampDataPoint> dataPoints = new ArrayList<>();
for (int i = 0; i < 20; i++) {
BlockTimestampDataPoint dataPoint = new BlockTimestampDataPoint();
dataPoint.minterPublicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
RANDOM.nextBytes(dataPoint.minterPublicKey);
dataPoint.minterAccountLevel = RANDOM.nextInt(5) + 5;
dataPoint.blockTimestamp = Block.calcTimestamp(parentBlockData, dataPoint.minterPublicKey, dataPoint.minterAccountLevel);
System.out.printf("[%d] level %d, blockTimestamp %d - parentTimestamp %d = %d%n",
i,
dataPoint.minterAccountLevel,
dataPoint.blockTimestamp,
parentBlockData.getTimestamp(),
dataPoint.blockTimestamp - parentBlockData.getTimestamp()
);
}
}
}
}

View File

@@ -177,4 +177,143 @@ public class RewardShareTests extends Common {
}
}
@Test
public void testCreateRewardSharesBeforeReduction() throws DataException {
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount dilbertAccount = Common.getTestAccount(repository, "dilbert");
// Create 6 reward shares
for (int i=0; i<6; i++) {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 7th reward share should fail because we've reached the limit (and we're not yet requiring a self share)
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateRewardSharesAfterReduction() throws DataException {
Common.useSettings("test-settings-v2-reward-shares.json");
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount dilbertAccount = Common.getTestAccount(repository, "dilbert");
// Create 2 reward shares
for (int i=0; i<2; i++) {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 3rd reward share should fail because we've reached the limit (and we haven't got a self share)
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateSelfAndRewardSharesAfterReduction() throws DataException {
Common.useSettings("test-settings-v2-reward-shares.json");
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount dilbertAccount = Common.getTestAccount(repository, "dilbert");
// Create 2 reward shares
for (int i=0; i<2; i++) {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 3rd reward share should fail because we've reached the limit (and we haven't got a self share)
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
// Now create a self share, which should succeed as we have space for it
AccountUtils.rewardShare(repository, dilbertAccount, dilbertAccount, sharePercent);
// 4th reward share should fail because we've reached the limit (including the self share)
assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateFounderRewardSharesBeforeReduction() throws DataException {
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount aliceFounderAccount = Common.getTestAccount(repository, "alice");
// Create 5 reward shares (not 6, because alice already starts with a self reward share in the genesis block)
for (int i=0; i<5; i++) {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 6th reward share should fail
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateFounderRewardSharesAfterReduction() throws DataException {
Common.useSettings("test-settings-v2-reward-shares.json");
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount aliceFounderAccount = Common.getTestAccount(repository, "alice");
// Create 5 reward shares (not 6, because alice already starts with a self reward share in the genesis block)
for (int i=0; i<5; i++) {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 6th reward share should fail
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
}

View File

@@ -4,6 +4,7 @@ import static org.junit.Assert.*;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@@ -14,6 +15,7 @@ import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.block.BlockChain.RewardByHeight;
import org.qortal.controller.BlockMinter;
@@ -109,7 +111,7 @@ public class RewardTests extends Common {
public void testLegacyQoraReward() throws DataException {
Common.useSettings("test-settings-v2-qora-holder-extremes.json");
long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShare();
long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShareAtHeight(1);
BigInteger qoraHoldersShareBI = BigInteger.valueOf(qoraHoldersShare);
long qoraPerQort = BlockChain.getInstance().getQoraPerQortReward();
@@ -190,6 +192,47 @@ public class RewardTests extends Common {
}
}
@Test
public void testLegacyQoraRewardReduction() throws DataException {
Common.useSettings("test-settings-v2-qora-holder-extremes.json");
// Make sure that the QORA share reduces between blocks 4 and 5
assertTrue(BlockChain.getInstance().getQoraHoldersShareAtHeight(5) < BlockChain.getInstance().getQoraHoldersShareAtHeight(4));
// Keep track of balance deltas at each height
Map<Integer, Long> chloeQortBalanceDeltaAtEachHeight = new HashMap<>();
try (final Repository repository = RepositoryManager.getRepository()) {
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
long chloeLastQortBalance = initialBalances.get("chloe").get(Asset.QORT);
for (int i=2; i<=10; i++) {
Block block = BlockUtils.mintBlock(repository);
// Add to map of balance deltas at each height
long chloeNewQortBalance = AccountUtils.getBalance(repository, "chloe", Asset.QORT);
chloeQortBalanceDeltaAtEachHeight.put(block.getBlockData().getHeight(), chloeNewQortBalance - chloeLastQortBalance);
chloeLastQortBalance = chloeNewQortBalance;
}
// Ensure blocks 2-4 paid out the same rewards to Chloe
assertEquals(chloeQortBalanceDeltaAtEachHeight.get(2), chloeQortBalanceDeltaAtEachHeight.get(4));
// Ensure block 5 paid a lower reward
assertTrue(chloeQortBalanceDeltaAtEachHeight.get(5) < chloeQortBalanceDeltaAtEachHeight.get(4));
// Check that the reward was 20x lower
assertTrue(chloeQortBalanceDeltaAtEachHeight.get(5) == chloeQortBalanceDeltaAtEachHeight.get(4) / 20);
// Orphan to block 4 and ensure that Chloe's balance hasn't been incorrectly affected by the reward reduction
BlockUtils.orphanToBlock(repository, 4);
long expectedChloeQortBalance = initialBalances.get("chloe").get(Asset.QORT) + chloeQortBalanceDeltaAtEachHeight.get(2) +
chloeQortBalanceDeltaAtEachHeight.get(3) + chloeQortBalanceDeltaAtEachHeight.get(4);
assertEquals(expectedChloeQortBalance, AccountUtils.getBalance(repository, "chloe", Asset.QORT));
}
}
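The 20x assertion above follows from the share values alone, assuming the test chain config reduces the legacy QORA holders' share from 0.20 to 0.01 at the boundary (as blockchain.json above does by height) and that Chloe's per-block delta comes entirely from that share of an unchanged block reward R:

$$ \frac{\Delta_5}{\Delta_4} = \frac{0.01\,R}{0.20\,R} = \frac{1}{20} $$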
/** Use Alice-Chloe reward-share to bump Chloe from level 0 to level 1, then check orphaning works as expected. */
@Test
public void testLevel1() throws DataException {
@@ -295,7 +338,7 @@ public class RewardTests extends Common {
* So Dilbert should receive 100% - legacy QORA holder's share.
*/
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShare();
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShareAtHeight(1);
final long remainingShare = 1_00000000 - qoraHoldersShare;
long dilbertExpectedBalance = initialBalances.get("dilbert").get(Asset.QORT);

View File

@@ -0,0 +1,210 @@
package org.qortal.test.network;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.message.*;
import org.qortal.transform.Transformer;
import java.nio.ByteBuffer;
import java.security.Security;
import java.util.*;
import static org.junit.Assert.*;
public class OnlineAccountsV3Tests {
private static final Random RANDOM = new Random();
static {
// This must go before any calls to LogManager/Logger
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
Security.insertProviderAt(new BouncyCastleProvider(), 0);
Security.insertProviderAt(new BouncyCastleJsseProvider(), 1);
}
@Ignore("For informational use")
@Test
public void compareV2ToV3() throws MessageException {
List<OnlineAccountData> onlineAccounts = generateOnlineAccounts(false);
// How many of each timestamp and leading byte (of public key)
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte = convertToHashMaps(onlineAccounts);
byte[] v3DataBytes = new GetOnlineAccountsV3Message(hashesByTimestampThenByte).toBytes();
int v3ByteSize = v3DataBytes.length;
byte[] v2DataBytes = new GetOnlineAccountsV2Message(onlineAccounts).toBytes();
int v2ByteSize = v2DataBytes.length;
int numTimestamps = hashesByTimestampThenByte.size();
System.out.printf("For %d accounts split across %d timestamp%s: V2 size %d vs V3 size %d%n",
onlineAccounts.size(),
numTimestamps,
numTimestamps != 1 ? "s" : "",
v2ByteSize,
v3ByteSize
);
for (var outerMapEntry : hashesByTimestampThenByte.entrySet()) {
long timestamp = outerMapEntry.getKey();
var innerMap = outerMapEntry.getValue();
System.out.printf("For timestamp %d: %d / 256 slots used.%n",
timestamp,
innerMap.size()
);
}
}
private Map<Long, Map<Byte, byte[]>> convertToHashMaps(List<OnlineAccountData> onlineAccounts) {
// How many of each timestamp and leading byte (of public key)
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte = new HashMap<>();
for (OnlineAccountData onlineAccountData : onlineAccounts) {
Long timestamp = onlineAccountData.getTimestamp();
Byte leadingByte = onlineAccountData.getPublicKey()[0];
hashesByTimestampThenByte
.computeIfAbsent(timestamp, k -> new HashMap<>())
.compute(leadingByte, (k, v) -> OnlineAccountsManager.xorByteArrayInPlace(v, onlineAccountData.getPublicKey()));
}
return hashesByTimestampThenByte;
}
@Test
public void testOnGetOnlineAccountsV3() {
List<OnlineAccountData> ourOnlineAccounts = generateOnlineAccounts(false);
List<OnlineAccountData> peersOnlineAccounts = generateOnlineAccounts(false);
Map<Long, Map<Byte, byte[]>> ourConvertedHashes = convertToHashMaps(ourOnlineAccounts);
Map<Long, Map<Byte, byte[]>> peersConvertedHashes = convertToHashMaps(peersOnlineAccounts);
List<String> mockReply = new ArrayList<>();
// Warning: no double-checking/fetching - we must be ConcurrentMap compatible!
// So no contains()-then-get() or multiple get()s on the same key/map.
for (var ourOuterMapEntry : ourConvertedHashes.entrySet()) {
Long timestamp = ourOuterMapEntry.getKey();
var ourInnerMap = ourOuterMapEntry.getValue();
var peersInnerMap = peersConvertedHashes.get(timestamp);
if (peersInnerMap == null) {
// Peer doesn't have this timestamp, so if it's valid (i.e. not too old) then we'd have to send all of ours
for (Byte leadingByte : ourInnerMap.keySet())
mockReply.add(timestamp + ":" + leadingByte);
} else {
// We have entries for this timestamp so compare against peer's entries
for (var ourInnerMapEntry : ourInnerMap.entrySet()) {
Byte leadingByte = ourInnerMapEntry.getKey();
byte[] peersHash = peersInnerMap.get(leadingByte);
if (!Arrays.equals(ourInnerMapEntry.getValue(), peersHash)) {
// We don't match peer, or peer doesn't have - send all online accounts for this timestamp and leading byte
mockReply.add(timestamp + ":" + leadingByte);
}
}
}
}
int numOurTimestamps = ourConvertedHashes.size();
System.out.printf("We have %d accounts split across %d timestamp%s%n",
ourOnlineAccounts.size(),
numOurTimestamps,
numOurTimestamps != 1 ? "s" : ""
);
int numPeerTimestamps = peersConvertedHashes.size();
System.out.printf("Peer sent %d accounts split across %d timestamp%s%n",
peersOnlineAccounts.size(),
numPeerTimestamps,
numPeerTimestamps != 1 ? "s" : ""
);
System.out.printf("We need to send: %d%n%s%n", mockReply.size(), String.join(", ", mockReply));
}
// Full round trip: random accounts (with signatures) -> hash maps -> message bytes -> decoded message
@Test
public void testSerialization() throws MessageException {
List<OnlineAccountData> onlineAccountsOut = generateOnlineAccounts(true);
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteOut = convertToHashMaps(onlineAccountsOut);
validateSerialization(hashesByTimestampThenByteOut);
}
@Test
public void testEmptySerialization() throws MessageException {
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteOut = Collections.emptyMap();
validateSerialization(hashesByTimestampThenByteOut);
hashesByTimestampThenByteOut = new HashMap<>();
validateSerialization(hashesByTimestampThenByteOut);
}
// Serializes the given map via GetOnlineAccountsV3Message, deserializes it again, and asserts the round-tripped copy matches
private void validateSerialization(Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteOut) throws MessageException {
Message messageOut = new GetOnlineAccountsV3Message(hashesByTimestampThenByteOut);
byte[] messageBytes = messageOut.toBytes();
ByteBuffer byteBuffer = ByteBuffer.wrap(messageBytes).asReadOnlyBuffer();
GetOnlineAccountsV3Message messageIn = (GetOnlineAccountsV3Message) Message.fromByteBuffer(byteBuffer);
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteIn = messageIn.getHashesByTimestampThenByte();
Set<Long> timestampsIn = hashesByTimestampThenByteIn.keySet();
Set<Long> timestampsOut = hashesByTimestampThenByteOut.keySet();
assertEquals("timestamp count mismatch", timestampsOut.size(), timestampsIn.size());
assertTrue("timestamps mismatch", timestampsIn.containsAll(timestampsOut));
for (Long timestamp : timestampsIn) {
Map<Byte, byte[]> hashesByByteIn = hashesByTimestampThenByteIn.get(timestamp);
Map<Byte, byte[]> hashesByByteOut = hashesByTimestampThenByteOut.get(timestamp);
assertNotNull("timestamp entry missing", hashesByByteOut);
Set<Byte> leadingBytesIn = hashesByByteIn.keySet();
Set<Byte> leadingBytesOut = hashesByByteOut.keySet();
assertEquals("leading byte entry count mismatch", leadingBytesOut.size(), leadingBytesIn.size());
assertTrue("leading byte entry mismatch", leadingBytesIn.containsAll(leadingBytesOut));
for (Byte leadingByte : leadingBytesOut) {
byte[] bytesIn = hashesByByteIn.get(leadingByte);
byte[] bytesOut = hashesByByteOut.get(leadingByte);
assertTrue("pubkey hash mismatch", Arrays.equals(bytesOut, bytesIn));
}
}
}
// Builds a random batch of online accounts spread across one or two timestamps; signatures are random bytes when requested
private List<OnlineAccountData> generateOnlineAccounts(boolean withSignatures) {
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
int numTimestamps = RANDOM.nextInt(2) + 1; // 1 or 2
for (int t = 0; t < numTimestamps; ++t) {
long timestamp = (1L << 31) + ((t + 1) << 12); // base of 2^31 plus a distinct offset per group; parentheses needed because '+' binds tighter than '<<'
int numAccounts = RANDOM.nextInt(3000);
for (int a = 0; a < numAccounts; ++a) {
byte[] sig = null;
if (withSignatures) {
sig = new byte[Transformer.SIGNATURE_LENGTH];
RANDOM.nextBytes(sig);
}
byte[] pubkey = new byte[Transformer.PUBLIC_KEY_LENGTH];
RANDOM.nextBytes(pubkey);
onlineAccounts.add(new OnlineAccountData(timestamp, sig, pubkey));
}
}
return onlineAccounts;
}
}
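
For reference, the tests above fold all public keys sharing a (timestamp, leading byte) bucket into a single hash via OnlineAccountsManager.xorByteArrayInPlace. Below is a minimal sketch of that helper's assumed behaviour (the real implementation lives in the core; treat this as an illustration, not its source): XOR is commutative and associative, so two nodes holding the same set of accounts produce identical bucket hashes regardless of insertion order.

// Sketch only - assumed semantics of OnlineAccountsManager.xorByteArrayInPlace.
// Folds 'value' into the accumulator 'acc' (arrays assumed equal length),
// allocating a copy of the first key when the accumulator is still null.
static byte[] xorByteArrayInPlace(byte[] acc, byte[] value) {
    if (acc == null)
        return java.util.Arrays.copyOf(value, value.length);
    for (int i = 0; i < acc.length; i++)
        acc[i] ^= value[i];
    return acc;
}

Called via Map.compute() in convertToHashMaps above, a null accumulator simply seeds the bucket with a copy of the first public key.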

View File

@@ -0,0 +1,90 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 },
{ "height": 2, "target": 70000, "deviation": 10000, "power": 0.8 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": "100" },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 5 }
]
}
}
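
The qoraHoldersShareByHeight array in this test chain is a height-keyed schedule: the share in force at a block is the last entry whose height does not exceed that block's height. A minimal sketch of that lookup, assuming the schedule is sorted by ascending height and non-empty (the class and method names here are illustrative, not the core's API):

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.List;

// Illustrative only. With the schedule above, heights 1..999999 resolve to 0.20
// and heights 1000000 and above resolve to 0.01.
class QoraHoldersShareSketch {
    static double shareAtHeight(List<SimpleImmutableEntry<Integer, Double>> schedule, int blockHeight) {
        double share = schedule.get(0).getValue(); // first entry covers the chain from height 1
        for (SimpleImmutableEntry<Integer, Double> entry : schedule) {
            if (entry.getKey() > blockHeight)
                break; // later entries have not activated yet
            share = entry.getValue();
        }
        return share;
    }
}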

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 0
"disableReferenceTimestamp": 0,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 5, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 20 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 6,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 20 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,94 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 1655460000000, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 0,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": "100" },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 5 }
]
}
}
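
This variant activates the reward-share limit from genesis (rewardShareLimitTimestamp: 0) and steps maxRewardSharesByTimestamp down from 6 to 3 at a concrete timestamp rather than a far-future placeholder. The featureTriggers pattern used throughout these files is plain timestamp or height gating; a minimal illustration follows (names are hypothetical, not the core's API):

// Hypothetical helper showing the gating these trigger values imply:
// a timestamp trigger of 0 is active for every block, while 9999999999999
// (roughly year 2286 in milliseconds) is effectively "never".
class FeatureTriggerSketch {
    static boolean isActiveAt(long triggerTimestamp, long blockTimestamp) {
        return blockTimestamp >= triggerTimestamp;
    }

    static boolean isActiveAtHeight(int triggerHeight, int blockHeight) {
        return blockHeight >= triggerHeight;
    }
}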

View File

@@ -10,7 +10,11 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
@@ -26,7 +30,10 @@
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraHoldersShareByHeight": [
{ "height": 1, "share": 0.20 },
{ "height": 1000000, "share": 0.01 }
],
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
@@ -51,11 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,19 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-block-timestamps.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100,
"bootstrapFilenamePrefix": "test-",
"dataPath": "data-test",
"tempDataPath": "data-test/_temp",
"listsPath": "lists-test",
"storagePolicy": "FOLLOWED_OR_VIEWED",
"maxStorageCapacity": 104857600
}

View File

@@ -0,0 +1,19 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-reward-shares.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100,
"bootstrapFilenamePrefix": "test-",
"dataPath": "data-test",
"tempDataPath": "data-test/_temp",
"listsPath": "lists-test",
"storagePolicy": "FOLLOWED_OR_VIEWED",
"maxStorageCapacity": 104857600
}
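
These test-settings files let unit tests point the core at a specific test chain (here src/test/resources/test-chain-v2-reward-shares.json) with a local testdb repository and no peers. A sketch of how a test would typically load one, assuming this codebase's Common.useSettings test helper and a hypothetical filename for the settings file above (the diff does not show its actual name):

import org.junit.Before;
import org.qortal.test.common.Common;

public class RewardShareSettingsSketch {
    @Before
    public void beforeTest() throws Exception {
        // Hypothetical filename - the real name of the settings file above is not shown in this diff.
        // Common.useSettings is assumed to load the JSON and apply it for the duration of the test.
        Common.useSettings("test-settings-v2-reward-shares.json");
    }
}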