Added real address to API results - Previously, when querying blocks, the API results showed an address derived from the reward share public key. This address is not useful for viewing, as it is not the address used for QORT. This change displays the 'real' Qortal address instead. Thanks @AlphaX-Qortal
Added group member check to validations, plus related validation fixes.
Network changes - Moved unnecessary 'we already have connection' messages from info to debug logging. Updated the minPeerVersion default to the current release version (4.6.5). Updated the default peer list. Updated syntax and formatting.
Updated dependencies
Thanks @AlphaX-Qortal
Restructured database connections for better garbage collection - resolves a long-standing memory leak in multiple places, which became easier to pinpoint after crashed threads were made to restart. Thanks so much to @kennycud for this improvement!
Minter group check optimizations - tested by 50+ nodes for multiple days. The only thing to verify before merging the upcoming changes from Alpha is the additional boolean (isMinterValid) passed in to canMint on line 1521 of the current Block.java.
Default thread limits per message type can be specified in Settings.setAdditionalDefaults(), e.g.
maxThreadsPerMessageType.add(new ThreadLimit("GET_ARBITRARY_DATA_FILE", 5));
These can also be overridden on a per-node basis in settings.json, e.g.
"maxThreadsPerMessageType": [
{ "messageType": "GET_ARBITRARY_DATA_FILE", "limit": 3 },
{ "messageType": "GET_ARBITRARY_DATA_FILE_LIST", "limit": 3 }
]
settings.json values take priority, but any message types that aren't specified in settings.json will still be included from the Settings.java defaults. This allows single message types to be overridden in settings.json without removing the limits for all of the other message types.
Any messages that arrive are discarded if the node is already at the thread limit for that message type.
Warnings are now shown in the logs if the total number of active threads reaches 90% of the allocated thread pool size. Additionally, it can warn per message type by specifying a per-message-type warning threshold in settings.json, e.g.
"threadCountPerMessageTypeWarningThreshold": 20
The above setting would warn in the logs if a single message type was consuming more than 20 threads at once, therefore making it a candidate to be limited in maxThreadsPerMessageType.
Initial values of maxThreadsPerMessageType are guesses and may need modifying based on real world results. Limiting threads may impact functionality, so this should be carefully tested.
Also be aware that the thread tracking may reduce network performance slightly, so be sure to test thoroughly on slower hardware.
There are 3 values that need to be specified for this feature trigger:
- blockRewardBatchSize: the number of blocks per batch (1000 recommended).
- blockRewardBatchStartHeight: the block height at which the system switches to batch reward distributions. Must be a multiple of blockRewardBatchSize. Ideally this would be set to a block height that will occur at least 1-2 weeks after the release / auto update goes live.
- blockRewardBatchAccountsBlockCount: the number of blocks to include online accounts in the lead up to the batch reward distribution block (25 recommended).
Once active, rewards are no longer distributed every block and instead are distributed every 1000th block. This includes all fees from the distribution block and the previous 999 blocks.
The online accounts used for the distribution are taken from one of the previous 25 blocks (e.g. blocks xxxx975-xxxx999). The validation ensures that it is always the block with the highest number of online accounts in this range. If this number of online accounts is shared by multiple blocks, it will pick the one with the lowest height.
The idea behind 25 blocks is that it's low enough to reduce load in the other 975 blocks, but high enough for it to be extremely difficult for block signers to influence reward payouts.
Batch distribution blocks contain a copy of the online accounts from one of these 25 preceding blocks. However, the online account signatures and online accounts timestamp are excluded to save space. The core validates that the copy of online accounts exactly matches those of the earlier validated block, and that they are therefore valid.
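As a rough illustration of the selection rule above (a JavaScript sketch with hypothetical field names, not the actual Java implementation):

function selectOnlineAccountsSourceBlock(candidateBlocks) {
    // candidateBlocks: the 25 blocks preceding the distribution block,
    // each assumed to have { height, onlineAccountsCount }
    let best = null;
    for (const block of candidateBlocks) {
        if (best === null
                || block.onlineAccountsCount > best.onlineAccountsCount
                || (block.onlineAccountsCount === best.onlineAccountsCount && block.height < best.height)) {
            best = block;
        }
    }
    return best; // most online accounts, with ties broken by lowest height
}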
Fairly comprehensive unit tests have been written but this needs a lot more testing on testnets before it can be considered stable.
Future note: Once released, the online accounts mempow can optionally be scaled back so that it is no longer running 24/7, and instead is only running for 2-3 hours each day in the lead up to the batch distribution block. In each 1000 block cycle, the mempow would ideally start at least an hour before block xxxx975 is due to be minted, and continue for an hour after block xxxx000 (the reward distribution block). None of the ideas mentioned in this last paragraph are coded yet.
- Add async responder thread from @catbref which was previously only in place for LTC
- Log when computing mempow nonces
- Skip transaction import if signature is invalid
- Added checks to a message test to mimic trade bot transaction lookups
- PUBLICIZE transactions are no longer possible.
- ARBITRARY transactions are now only possible using a fee.
- MESSAGE transactions only confirm when they are being sent to an AT. Messages to regular addresses (or no recipient) will expire after 24 hours.
- Difficulty for confirmed MESSAGE transactions increases from 14 to 16.
- Difficulty for unconfirmed MESSAGE transactions decreases from 14 to 12.
This is a short term limit, is well above current usage levels, and can be increased substantially in future once the block minter code has been improved.
This allows Q-Apps and websites to be developed and tested in real time, by proxying an existing webserver such as localhost:5173 from React/Vite. The proxy adds all QDN functionality to the existing server in real time. Needs UI integration before all features can be used.
Using a separate database method for now to reduce risk of interfering with other parts of the code which use it. It can be combined later when there is more testing time.
It doesn't quite work as intended, so it's best that it doesn't interfere right away. 24 hours should be long enough for any issues to be manually resolved.
The dedicated cache manager thread is now used for metadata updates only. If metadata ever goes missing from the db, it would be straightforward to have a background thread that corrects any discrepancies between the filesystem and the db. Not adding that until it is needed.
Offers with more than 3 failures will be hidden from the API and websocket, to prevent unbuyable offers from staying in the order books and continuously failing. maxTradeOfferAttempts can be optionally increased on a node to show more trades that would otherwise be hidden.
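For example, a node operator could raise the limit in settings.json (the value shown is illustrative):
"maxTradeOfferAttempts": 10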
By default, only the latest resource is returned for a name/service combination. All identifiers can be optionally returned by setting `mode` to "ALL".
More search modes can be added in the future, for instance "RELEVANT" or "POPULAR" (these are just ideas, and are not currently supported).
"query" searches name, identifier, title and description fields
"title" searches title only
"description" searches description only
All support "&prefix=true", to indicate searching by prefix only.
These can ultimately be used to help inform the cleanup manager on the best order to delete files when the node runs out of space. Public data should be given priority over private data (unless the node is part of a data market contract for that data - this isn't developed yet).
Allows for video seeking when using URL playback, even though the Range header isn't implemented yet. This could be heavily optimized by adding full support of the Range/Content-Range headers, however this is still a big step forward as it allows for (inefficient) seeking.
Should fix problems on systems unable to use IPv6 wildcard (::) for listening, and avoids having to manually specify "bindAddress": "0.0.0.0" in settings.json.
Any list with the following prefix will be used in block/follow logic:
blockedNames
blockedAddresses
followedNames
For instance, any names in a list named "blockedNames_CustomBlockList" would also be blocked, along with those in the standard "blockedNames" list.
This will ultimately allow apps to offer custom block/follow lists to users (once list functionality is added to the Q-Apps API).
Unhandled requests (where no file exists) are now forwarded to the index file, to allow for custom routing in the app. This applies to the APP service only. For the WEBSITE and other services, unhandled requests will return a 404. In future, we may be able to allow websites to opt in to URL routing too, and maybe even allow both services to specify custom routing rules in a file.
This causes the data to be stored with the requested filename, instead of generating a random one. Also, randomly generated filenames now use a timestamp instead of a random number.
- "identifier" is an alternative to "query" that will search identifiers only.
- "name" is an alternative to "query" that will search names only.
- "query" remains the same as before - it searches both name and identifier fields.
- "prefix" is a boolean, and when true it will only match the beginning of each field. Works with "identifier", "name", and "query" params.
Usage:
Add the `encoding=BASE64` query string parameter to opt in to base64 encoding of returned chat data. Defaults to BASE58 for backwards compatibility.
Compatible endpoints:
GET /chat/messages
GET /chat/message/{signature}
GET /chat/active/{address}
GET /websockets/chat/active/*
GET /websockets/chat/messages
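For example, opting in to base64 on one of the above endpoints (the address is illustrative):
curl "http://localhost:12391/chat/active/QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2?encoding=BASE64"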
This is required as it's the only place that holds the original order of each block's transactions. We cannot sort them, because the comparator function for transactions has some dependencies on the existing order for AT transactions. As a result, topOnly nodes cannot perform this reshape, and will be unable to run this version.
All /arbitrary endpoints responsible for publishing data now support an optional "preview" query string parameter. If true, these endpoints will return a URL path to open the preview, rather than returning transaction bytes.
ArbitraryDataFile now has a fileCount() method which returns the total number of files associated with that piece of data - i.e. chunks, metadata, and the complete file in cases where it isn't chunked.
When "archiveVersion" is set to 2 in settings, this should allow the archive size to reduce by over 90%. Some nodes might want to maintain an older/larger version, for the purposes of development/debugging, so this is currently opt-in.
Default is version "1". If version "2" is specified, the API will return the full transaction JSON on success, rather than just "true".
Example usage:
curl -X POST "http://localhost:12391/transactions/process" -H "X-API-VERSION: 2" -d "signedTransactionBytesHere"
The former is available for all blocks, whereas the latter is only available for unpruned blocks.
Also removed joins with the Blocks table - as the Blocks table is also pruned - and instead retrieved the height from the Transactions table.
This should fix an edge case where AT states data was pruned/trimmed but it was then later required in consensus. The older state was deleted because it was replaced by a new "latest" state in a brand new block. But once the new "latest" state was orphaned from the block, the old "latest" state was then required again.
This works around the problem by excluding very recent blocks in the latest AT states data, so that it is unaffected by real-time sync activity.
The trade off is that we could end up retaining more AT states than needed, so a secondary cleanup process may need to run at some time in the future to remove these. But it should only be a minimal amount of data, and can be cleaned up with a single query. This would have been happening to a certain degree already.
# Conflicts:
# src/main/java/org/qortal/controller/repository/AtStatesPruner.java
# src/main/java/org/qortal/controller/repository/AtStatesTrimmer.java
Examples:
### Get URL to load a QDN resource
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "THUMBNAIL",
name: "QortalDemo",
identifier: "qortal_avatar"
// path: "filename.jpg" // optional - not needed if resource contains only one file
});
```
### Get URL to load a QDN website
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "QortalDemo",
});
```
### Get URL to load a specific file from a QDN website
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "AlphaX",
path: "/assets/img/logo.png"
});
```
The URL used to access the gateway is now interpreted, and the most appropriate resource is served. This means it can be used in different ways to retrieve any type of content from QDN. For example:
/QortalDemo
/QortalDemo/minting-leveling/index.html
/WEBSITE/QortalDemo
/WEBSITE/QortalDemo/minting-leveling/index.html
/APP/QortalDemo
/THUMBNAIL/QortalDemo/qortal_avatar
/QCHAT_IMAGE/birtydasterd/qchat_BfBeCz
/ARBITRARY_DATA/PirateChainWallet/LiteWalletJNI/coinparams.json
This allows any QDN resource (e.g. an IMAGE) to be linked to from a website/app and then rendered on screen. It isn't yet supported in gateway or domain map mode, as these need some more thought.
The simplest way to link to another QDN website is to include a link with the format:
<a href="qortal://WEBSITE/QortalDemo">link text</a>
This can be expanded to link to a specific path, e.g.:
<a href="qortal://WEBSITE/QortalDemo/minting-leveling/index.html">link text</a>
Or it can be initiated programmatically, via qortalRequest():
let res = await qortalRequest({
action: "LINK_TO_QDN_RESOURCE",
service: "WEBSITE",
name: "QortalDemo",
path: "/minting-leveling/index.html" // Optional
});
Note that qortal:// links don't yet support identifiers, so the above format is not confirmed.
This prevents foreign coins from leaving the local wallet when there is a high probability that the trade will fail, and therefore should reduce the chances of losing transaction fees due to refunds.
Whenever this occurs, the UI will show "Trade has an existing buy request or is pending cancellation." after clicking Buy.
- All APIs are now served over the gateway and domain map, with the exception of /admin/*
- AdminResource moved to a "restricted" folder, so that it isn't served over the gateway/domainMap ports.
- This opens the door to websites/apps calling core APIs directly for certain read-only functions, as an alternative to using qortalRequest().
Requests are now internally routed to the existing API handlers. This should allow new Q-Apps API endpoints to be added much more quickly, as well as removing the need to maintain their code separately from the regular API endpoints.
isTooDivergent will be true or false if a definitive decision has been made, or missing from the response if not yet known. Therefore it should be safe to treat `"isTooDivergent": false` as a peer that is on the same chain.
Currently enabled for topOnly nodes only. This will detect if the node is on a divergent chain, and will force a bootstrap or resync (depending on settings) in order to rejoin the main chain.
This avoids the wasted time and consensus confusion caused by syncing and then failing validation. This is significant after the algo has run, as many signers will become invalid.
The validation is currently set to a feature trigger of height 0, although this will likely be set to a future block, in case there are any cases in the chain's history where this validation may fail (e.g. transfer privs?)
When users buy QORT ("Alice"-side), most of the API time is spent computing mempow for the MESSAGE sent to Bob's AT.
This is the final stage of startResponse(), after Alice's P2SH has already been broadcast.
To speed this up, the MESSAGE part is moved into its own thread allowing startResponse() to return sooner, improving the user experience.
Caveats:
If MESSAGE importAsUnconfirmed() somehow fails then the buy won't complete and Alice will have to wait for the P2SH refund.
If Alice shuts down her node while MESSAGE mempow is being computed then it's possible the shutdown will be blocked until mempow is complete.
Currently only implemented in LitecoinACCTv3TradeBot, as this is only a proof of concept.
Tested with multiple buys in the same block.
This is a hopeful fix for extra memory usage since mempow activated, due to adding a lot of load to the garbage collector. It only applies to accounts verified from the import queue; the optimization hasn't been applied to block processing. But verifying online accounts when processing blocks is rare and generally would only last a short amount of time.
This allows testnets to more easily coexist on the same machines that are running a mainnet instance, and still tests the mempow computation and verification in a non-resource-intensive way.
We no longer need all the code complexity, now that 24 hours have passed since activation. We don't validate online accounts beyond 12 hours, and the data is trimmed after 24 hours.
- Added "singleNodeTestnet" setting, allowing for fast and consecutive block minting, and no requirement for a minimum number of peers.
- Added "recoveryModeTimeout" setting (previously hardcoded in Synchronizer).
- Updated testnets documentation to include new settings and a quick start guide.
- Added "generic" minting account that can be used in testnets (not functional on mainnet), to simplify the process for new devs.
We expect a "Couldn't build a to-be-minted block" log on every startup due to trying to mint before having any accounts. This one has moved from error to info level because error logs can be quite intrusive when using an IDE.
This allows one message to reference another, e.g. for replies, edits, and reactions. We can't use the existing reference field as this is used for encryption and generally points to the user's lastReference at the time of signing.
"chatReference" is based on the "nameReference" field used in various name transactions, for similar purposes.
This needs a feature trigger timestamp to activate, and that same timestamp will need to be used in the UI since that is responsible for building the chat transactions.
The BLOCK_SUMMARIES message type would differentiate between an empty response and a missing/invalid response. However, in V2, a response with empty summaries would throw a BufferUnderflowException and be treated by the caller as a null message.
This caused problems when trying to find a common block with peers that have diverged by more than 8 blocks. With V1 the caller would know to search back further (e.g. 16 blocks) but in V2 it was treated as "no response" and so the caller would give up instead of increasing the look-back threshold.
This fix will identify BLOCK_SUMMARIES_V2 messages with no content, and return an empty array of block summaries instead of a null message.
Should be enough to recover any stuck nodes, as long as they haven't diverged more than 240 blocks from the main chain.
This should hopefully fix a potential issue where peer's chain tip data becomes contaminated with other summary data, causing incorrect sync decisions.
The `start.sh` & `stop.sh` scripts have already been marked as executable in the source folder... But since we have only piped their contents, we need to set the correct file permissions again.
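For example:
chmod +x start.sh stop.sh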
This allows the first pass to always be served from the peer's cache of 8 summaries. A maximum of 7 can be returned, because the 8th spot is needed for the parent block's signature.
Loading from the cache should speed up sync decisions, particularly when choosing which peer to sync from. The greater the number of connected peers, the more significant this optimization will be. It should also reduce wasted network requests and data usage.
Adding this check prior to making a network request is a simple way to introduce the new cached summaries from BLOCK_SUMMARIES_V2 without having to rewrite a lot of the complex sync / peer comparison logic. Longer term we may want to rewrite that logic to read from the cache directly, but it doesn't make sense to introduce that level of risk at this point in time, especially as the Synchronizer may be rewritten soon to prefer longer chains.
Even so, this is still quite a high risk commit so lots of testing will be needed.
The main difference here is that we now remove items from the onlineAccountsImportQueue in a batch, _after_ they have been imported. This prevents duplicates from being added to the queue in the previous time gap between them being removed and imported.
This caused large gaps with no presence data. They are removed when they expire, causing the local count to drop to zero, and the node would only start requesting them again once a peer had pushed one or more entries proactively.
Touches quite a few files because:
* Deprecate HEIGHT_V2 because it doesn't contain enough info to be fully useful during sync.
Newer peers will re-use BLOCK_SUMMARIES_V2.
* For newer peers, instead of sending / broadcasting HEIGHT_V2,
send top N block summaries instead, to avoid requests for minor reorgs.
* When responding to GET_BLOCK, and we don't actually have the requested block,
we currently send an empty BLOCK_SUMMARIES message instead of not responding,
which would cause a slow timeout in Synchronizer.
This pattern has spread to other network message response code,
so now we introduce a generic 'unknown' message type for all these cases.
* Remove PeerChainTipData class entirely and re-use BlockSummaryData instead.
* Each Peer instance used to hold PeerChainTipData - essentially single latest block summary - but now holds a List of latest block summaries.
* PeerChainTipData getter/setter methods modified for compatibility at this point in time.
* Repository methods that return BlockSummaryData (or lists of) now try to fully populate them,
including newly added block reference field.
* Re-worked Peer.canUseCommonBlockData() to be more readable
* Cherry-picked patch to Message.fromByteBuffer() to pass an empty, read-only ByteBuffer to subclass fromByteBuffer() methods, instead of null.
This allows natural use of BufferUnderflowException if a subclass tries to use read(), or hasRemaining(), etc. from an empty data-payload message.
Previously this could have caused an NPE.
It will now request online accounts every 1 minute instead of every 5 seconds, except for the first 5 minutes following a new online accounts timestamp, in which it will request every 5 seconds (referred to as the "burst" interval). It will also use the burst interval for the first 5 minutes after the node starts.
This is based on the idea that most online accounts arrive soon after a new timestamp begins, and so there is no need to request accounts so frequently after that. This should reduce data usage by a significant amount.
Once mempow is fully rolled out, the "burst" feature can be reduced or removed, since online accounts will be sent ahead of time, generally 15-30 mins prior to the new online accounts timestamp becoming active.
This closes a gap where accounts would be moved from onlineAccountsImportQueue to onlineAccountsToAdd, but not yet imported. During this time, there was nothing to stop them from being added to the import queue again, causing duplicate validations.
For regular groups, we require that the owner adds/removes the admins, so group membership is adequately checked. However for null-owned groups this check is skipped. So we need an additional condition to prevent non-group members from issuing a transaction for approval by the group admins.
* The dev group (ID 1) is owned by the null account with public key 11111111111111111111111111111111
* To regain access to otherwise blocked owner-based rules, it has different validation logic
* which applies to groups with this same null owner.
*
* The main difference is that approval is required for certain transaction types relating to
* null-owned groups. This allows existing admins to approve updates to the group (using group's
* approval threshold) instead of these actions being performed by the owner.
*
* Since these apply to all null-owned groups, this allows anyone to update their group to
* the null owner if they want to take advantage of this decentralized approval system.
*
* Currently, the affected transaction types are:
* - AddGroupAdminTransaction
* - RemoveGroupAdminTransaction
*
* This same approach could ultimately be applied to other group transactions too.
Using the feature trigger timestamp here should be much less error prone than a whole new block message version. Once mempow has been live for at least 24 hours, the feature trigger can be removed and the code cleaned up, as all online accounts signatures will use the new format from that time onwards (legacy signatures are trimmed after 24 hours).
Due to the large amount of data, it can take some time for the request to be processed and data to be transferred. It may make more sense to load the initial state from the standard API, and just use the websockets for updates.
This should fix 500 error which could prevent forced refund attempts for LTC and other chains. It wouldn't have affected any normal trade-bot refund functionality.
This allows for a different share bin distribution starting at an undecided future block height. This height will correspond with the QORA reduction. The new values were decided in a recent community vote.
This allows the QORA share percentage to be modified at different heights, based on community votes. Added unit test to simulate a reduction.
# Conflicts:
# src/test/java/org/qortal/test/minting/RewardTests.java
This rewrite may have been causing problems with connections in the network, due to peers being forgotten too easily. Reverting for now to see if it solves the problem.
This reverts commit d81071f2544ec72807f70892388ae2ddec8dbc8c.
# Conflicts:
# src/main/java/org/qortal/network/Network.java
If this proves to not have any significant bad effects on re-orgs, we could consider setting these even higher or even disabling the auto disconnect by default.
This allows users to increase their default birthday if they know that no wallets were created before a certain block, to reduce sync time. It also fixes some failing unit tests that relied on transactions between blocks 1900000 and 2000000.
Using a hardcoded signature ensures that the libraries cannot be swapped out without a core auto update, which requires the standard dev team approval process.
This replaces the previously hardcoded "numberOfAdditionalBatchesToSearch" variable, and specifies the minimum number of empty consecutive addresses required before a set of wallet transactions is considered complete. Used for foreign transaction lists and balances.
This is a simple way to discard the 5-minute online account timestamps (from out of date nodes) once the switch to 30-minute online account timestamps has taken place.
Online account nonces are appended to the onlineAccountsSignatures to avoid the need for a new field, but it probably makes more sense to separate them.
Although BlockMinter could reattach a repository session to its cache of potential blocks,
and these blocks would in turn reattach that repository session to their transactions,
further transaction-specific fields (e.g. creator PublicKeyAccount) were not being updated.
This would lead to NPEs like the following:
Exception in thread "BlockMinter" java.lang.NullPointerException
at org.qortal.repository.hsqldb.HSQLDBRepository.cachePreparedStatement(HSQLDBRepository.java:587)
at org.qortal.repository.hsqldb.HSQLDBRepository.prepareStatement(HSQLDBRepository.java:569)
at org.qortal.repository.hsqldb.HSQLDBRepository.checkedExecute(HSQLDBRepository.java:609)
at org.qortal.repository.hsqldb.HSQLDBAccountRepository.getBalance(HSQLDBAccountRepository.java:327)
at org.qortal.account.Account.getConfirmedBalance(Account.java:72)
at org.qortal.transaction.MessageTransaction.isValid(MessageTransaction.java:200)
at org.qortal.block.Block.areTransactionsValid(Block.java:1190)
at org.qortal.block.Block.isValid(Block.java:1137)
at org.qortal.controller.BlockMinter.run(BlockMinter.java:301)
where the Account has an associated repository session which is now obsolete.
This commit reverts BlockMinter back to obtaining a repository session before entering main loop.
To prevent a single or very small number of minters receiving the rewards for an entire tier, share bins can now require "activation". This adds the requirement that a minimum number of accounts must be present in a share bin before it is considered active. When inactive, the rewards and minters are added to the previous tier.
Summary of new functionality:
- If a share bin has more than one, but less than 30 accounts present, the rewards and accounts are shifted to the previous share bin.
- This process is iterative, so the accounts can shift through multiple tiers until the minimum number of accounts is met, OR the share bin's starting level is less than shareBinActivationMinLevel.
- Applies to level 7+, so that no backwards support is needed. It will only take effect once the first account reaches level 7.
This requires hot swapping the sharesByLevel data to combine tiers where needed, so is a considerable shift away from the immutable array that was in place previously.
All existing and new unit tests are now passing, however a lot more testing will be needed.
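A rough sketch of the iterative shifting described above (JavaScript with hypothetical names; the real logic operates on the sharesByLevel structure in the Java core):

function consolidateShareBins(bins, minAccountsPerBin, shareBinActivationMinLevel) {
    // bins are ordered by starting level; each is assumed to have
    // { startLevel, accounts, reward }. Assumption: any non-empty bin
    // below the minimum is considered inactive and shifts down a tier.
    for (let i = bins.length - 1; i > 0; i--) {
        const bin = bins[i];
        const underpopulated = bin.accounts.length > 0 && bin.accounts.length < minAccountsPerBin;
        if (underpopulated && bin.startLevel >= shareBinActivationMinLevel) {
            // Shift accounts and rewards to the previous tier; lower tiers are
            // re-checked on later iterations, so shifting can cascade
            bins[i - 1].accounts.push(...bin.accounts);
            bins[i - 1].reward += bin.reward;
            bin.accounts = [];
            bin.reward = 0;
        }
    }
    return bins;
}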
- This adds mempow requirements to online account importing (after activation timestamp), however doesn't yet add any requirements to block validation.
- It also causes the 'next' online accounts timestamp to be computed in addition to the 'current', so that the computed nonce value is ready when the next online accounts timestamp window begins.
Online account credit is a more useful definition of "minting" than block signing, from the user's perspective. Should bring UI minting/syncing status in line with the core's systray status.
- Reduce concurrent reward share limit from 6 to 3 (or from 5 to 2 when including self share) - as per community vote.
- Founders remain at 6 (5 when including self share) - also decided in community vote.
- When all slots are being filled, require that at least one is a self share, so that not all can be used for sponsorship.
- Activates at future undecided timestamp.
We already mark peers as misbehaved if they returned invalid signatures, but this wasn't sufficient when multiple copies of the same invalid block exist on the network (e.g. after a hard fork). In these cases, we need to be more proactive to avoid syncing with these peers, to increase the chances of preserving other candidate blocks.
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through the dataset, using up a lot of allKnownPeers lock time.
On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
- Show "Minting" as long as online accounts are submitted to the network (previously it related to block signing).
- Fixed bug causing it to regularly show "Synchronizing 100%".
- Only show "Synchronizing" if the chain falls more than 2 hours behind - anything less is unnecessary noise.
Symptoms are:
* AutoUpdate trying to run new ApplyUpdate process, but nothing appears in log-apply-update.?.txt
* Main qortal.jar process continues to run without updating
* Last AutoUpdate line in log.txt.? is:
2022-06-18 15:42:46 INFO AutoUpdate:258 - Applying update with: /usr/local/openjdk11/bin/java -Djava.net.preferIPv4Stack=false -Xss256k -Xmx1024m -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:5005 -cp new-qortal.jar org.qortal.ApplyUpdate
Changes are:
* child process now inherits parent's stdout / stderr (was piped from parent)
* child process is given a fresh stdin, which is immediately closed
* AutoUpdate now converts -agentlib JVM arg to -DQORTAL_agentlib
* ApplyUpdate converts -DQORTAL_agentlib to -agentlib
The latter two changes are to prevent a conflict where two processes try to reuse the same JVM debugging port number.
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.
Set min peer version to 3.3.203 in BlockV2Message class.
Bump v3 min peer version from 3.2.203 to 3.3.203
No need for toOnlineAccountTimestamp(long) as we only ever use getCurrentOnlineAccountTimestamp().
Latter now returns Long and does the call to NTP.getTime() on behalf of caller, removing duplicated NTP.getTime() calls and null checks in multiple callers.
Add aggregate-signature feature-trigger timestamp threshold checks where needed, near sign() and verify() calls.
Improve logging - but some logging will need to be removed / reduced before merging.
Aggregated signature should reduce block payload significantly,
as well as associated network, memory & CPU loads.
org.qortal.crypto.BouncyCastle25519 renamed to Qortal25519Extras.
Our class provides additional features such as DH-based shared secret,
aggregating public keys & signatures and sign/verify for aggregate use.
BouncyCastle's Ed25519 class copied in as BouncyCastleEd25519,
but with 'private' modifiers changed to 'protected',
to allow extension by our Qortal25519Extras class,
and to avoid lots of messy reflection-based calls.
Slight optimization to BlockMinter by adding OnlineAccountsManager.hasOnlineAccounts():boolean instead of returning actual data, only to call isEmpty()!
Move online account cache code from Block into OnlineAccountsManager, simplifying Block code and removing duplicated caches from Block also.
This tidies up those remaining set-based getters in OnlineAccountsManager.
No need for currentOnlineAccountsHashes's inner Map to be sorted, so addAccounts() creates a new ConcurrentHashMap instead of a ConcurrentSkipListMap.
Changed GetOnlineAccountsV3Message to use a single byte for count of hashes as it can only be 1 to 256.
256 is represented by 0.
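In other words (illustrative JavaScript):
const encodeCount = (count) => count % 256; // 256 is sent as 0
const decodeCount = (byte) => byte === 0 ? 256 : byte;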
Comments tidy-up.
Change v3 broadcast interval from 10s to 15s.
Adding support for GET_ONLINE_ACCOUNTS_V3 to Controller, which calls OnlineAccountsManager.
With OnlineAccountsV3, instead of nodes sending their list of known online accounts (public keys),
nodes now send a summary which contains hashes of known online accounts, one per timestamp + leading-byte combo.
Thus outgoing messages are much smaller and scale better with more users.
Remote peers compare the hashes and send back lists of online accounts (for that timestamp + leading-byte combo) where hashes do not match.
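Conceptually, building the summary looks something like this (a JavaScript sketch with hypothetical names; the real implementation is in Java, and the group hash function is assumed to be supplied):

function buildOnlineAccountsSummary(onlineAccounts, hashGroup) {
    // Group known online accounts by timestamp + leading byte of public key
    const groups = new Map();
    for (const account of onlineAccounts) {
        const key = `${account.timestamp}:${account.publicKey[0]}`;
        if (!groups.has(key)) groups.set(key, []);
        groups.get(key).push(account);
    }
    // One hash per group; a remote peer replies with full account lists
    // only for the groups whose hashes don't match its own
    const summary = new Map();
    for (const [key, accounts] of groups) {
        summary.set(key, hashGroup(accounts));
    }
    return summary;
}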
Massive rewrite of OnlineAccountsManager to maintain online accounts.
Now there are three caches:
1. all online accounts, but split into sets by timestamp
2. 'hashes' of all online accounts, one hash per timestamp+leading-byte combination
Mainly for efficient use by GetOnlineAccountsV3 message constructor.
3. online accounts for the highest blocks on our chain to speed up block processing
Note that highest blocks might be way older than 'current' blocks if we're somewhat behind in syncing.
Other OnlineAccountsManager changes:
* Use scheduling executor service to manage subtasks
* Switch from 'synchronized' to 'concurrent' collections
* Generally switch from Lists to Sets - requires improved OnlineAccountData.hashCode() - further work needed
* Only send V3 messages to peers with version >= 3.2.203 (for testing)
* More info on which online accounts lists are returned depending on use-cases
To test, change your peer's version (in pom.xml?) to v3.2.203.
A full sync is unavoidable for P2SH redeem/refund, so we need to be able to save our progress. Creating a new null seed wallet each time isn't an option because it relies on having a recent checkpoint to avoid having to sync large amounts of blocks every time (sync is per wallet, not per node).
This allows for compatibility with TRANSFER_PRIVS validation in commit 8950bb7, which treats any account with a non-null reference as "existing". It also avoids possible unknown side effects from trying to process and store transactions with a null reference - something that wouldn't have been possible until the validation was removed.
This should prevent the failed transactions that are encountered when issuing two or more in a short space of time. Using a feature trigger (hard fork) to release this, to avoid potential consensus confusion around the time of the update (older versions could consider the main chain invalid until updating).
This will hopefully reduce the number of failed trade offer listings that result in a nonfunctional trade bot (and the subsequent PENDING status shown in the UI).
This is needed to allow redeem/refund of P2SH without having an actively synced and initialized wallet. It also ultimately avoids us having to retain the wallet entropy in the trade bot states. Various safety checks have been introduced to make sure that a disposable wallet is never used for anything other than P2SH redeem/refund.
This has been modified to a) use full public keys instead of PKH, and b) hand off all transaction building, signing, and broadcasting to the (heavily customized) Pirate light wallet library.
Currently, new transactions take a very long time to be included in each block (or reach the intended recipient), because each node has to obtain a repository lock and import the transaction before it notifies its peers. This can take a long time due to the lock being held by the block minter or synchronizer, and this compounds with every peer that the transaction is routed through.
Validating signatures doesn't require a lock, and so can take place very soon after receipt of a new transaction. This change causes each node to broadcast a new transaction to its peers as soon as its signature is validated, rather than waiting until after the import.
When a notified peer then makes a request for the transaction data itself, this can now be loaded from the sig-valid import queue as an alternative to the repository (since they won't be in the repository until after the import, which likely won't have happened yet).
One small downside to this approach is that each unconfirmed transaction is now notified twice - once after the signature is deemed valid, and again in Controller.onNewTransaction(), but this should be an acceptable trade off given the speed improvements it should achieve. Another downside is that it could cause invalid transactions (with valid signatures) to propagate, but these would quickly be added to each peer's invalidUnconfirmedTransactions list after the import failure, and therefore be ignored.
Importing has to be single threaded since it requires the database lock, but there's nothing to stop us from validating signatures on multiple threads, as no lock is required. So it makes sense to separate these two functions to allow for possible multi threaded signature validation in the future, to speed up the process.
Everything remains single threaded in this commit. It should be functionally the same as before, to reduce risk.
Note: this relies on (a modified version of) liblitewallet-jni which is not included, but will ultimately be compiled for each supported architecture and hosted on QDN.
LiteWalletJni code is based on https://github.com/PirateNetwork/cordova-plugin-litewallet - thanks to @CryptoForge for the help in getting this up and running.
Note: it's important that this timestamp is set on a 1-hour boundary (such as 16:00:00) to ensure a clean switchover.
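For example, a candidate timestamp can be sanity-checked for a clean hour boundary (illustrative JavaScript):
const isOnHourBoundary = (timestampMillis) => timestampMillis % (60 * 60 * 1000) === 0;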
# Conflicts:
# src/main/java/org/qortal/block/BlockChain.java
Direct connections for arbitrary data are currently unlikely to succeed, because those allowing incoming connections generally have their slots maxed out and have reached maxPeers. The idea here is that some connections remain reserved for dedicated arbitrary data transfers, therefore temporarily circumventing the limit (up to a defined maximum number of reserved connections).
Arbitrary data connections will auto disconnect after 2 minutes (we might be able to reduce this at a later date), and it also probably makes sense for the requesting node to disconnect as soon as it has all the chunks that it needs (this part isn't implemented yet).
One downside of this feature is that the listen socket is now going to be accepting connections most of the time, since it is unlikely that we will regularly have 4 data peers connected. This could be improved by modifying the OP_ACCEPT behaviour based on whether we are expecting any data peers to connect. In most cases, this would allow it to remain closed. But for the sake of simplicity I will leave that optimization for a future commit.
This is used to force a quick disconnect for peers that are only connecting for the purposes of requesting data for a specific arbitrary transaction signature.
Note that it is currently not easy to distinguish between MESSAGE-type and PAYMENT-type AT transactions, so PAYMENT-type is currently the only one supported (and used). A hard fork will likely be needed in order to specify the type within each message.
This is a more standardized alternative to using GET /transactions/search?address=xyz. This avoids the need to build full transaction search ability into the lite node protocols right away.
This should bring in enough data for very basic chat and wallet functionality (using addresses rather than registered names).
Data currently comes from a single random peer, however this can be expanded to request from multiple peers to gain confidence in the accuracy of the data. If bad data is returned from a peer, it's not the end of the world since the transaction would just be considered invalid by full nodes and would be thrown out. But this should be mostly avoidable by taking data from multiple sources to improve confidence in its accuracy.
Lite nodes can't sync or mint blocks, and they also have a very limited ability to verify unconfirmed transactions due to a lack of contextual information (i.e. the blockchain). For now, most validation is skipped and they simply act as relays to help get transactions around the network. Full and topOnly nodes will disregard any invalid transactions upon receipt as usual, and since the lite nodes aren't signing any blocks, there is little risk to the reduced validation, other than the experience of the lite node itself. This can be tightened up considerably as the lite nodes become more powerful, but the current approach works as a PoC.
The command used was:
./protoc --plugin=protoc-gen-grpc-java=/Users/user/Downloads/protoc-gen-grpc-java-1.45.1-osx-x86_64.exe -I=src/main/resources/proto/zcash/ --java_out=src/main/java/ --grpc-java_out=src/main/java/ src/main/resources/proto/zcash/service.proto
Then repeat, replacing service.proto with compact_formats.proto and darkside.proto
Darkside isn't needed for mainnet functionality, but included for completeness, and might be useful for testing.
Q-Apps are static web apps written in javascript, HTML, CSS, and other static assets. The key difference between a Q-App and a fully static site is its ability to interact with both the logged-in user and on-chain data. This is achieved using the API described in this document.
# Section 0: Basic QDN concepts
## Introduction to QDN resources
Each published item on QDN (Qortal Data Network) is referred to as a "resource". A resource could contain anything from a few characters of text, to a multi-layered directory structure containing thousands of files.
Resources are stored on-chain; however, the data payload is generally stored off-chain and verified using an on-chain SHA-256 hash.
To publish a resource, a user must first register a name, and then include that name when publishing the data. Accounts without a registered name are unable to publish to QDN from a Q-App at this time.
Owning the name grants update privileges to the data. If that name is later sold or transferred, the permission to update that resource is moved to the new owner.
## Name, service & identifier
Each QDN resource has 3 important fields:
- `name` - the registered name of the account that is publishing the data (which will hold update/edit privileges going forwards).<br/><br/>
- `service` - the type of content (e.g. IMAGE or JSON). Different services have different validation rules. See [list of available services](#services).<br/><br/>
- `identifier` - an optional string to allow more than one resource to exist for a given name/service combination. For example, the name `QortalDemo` may wish to publish multiple images. This can be achieved by using a different identifier string for each. The identifier is only unique to the name in question, and so it doesn't matter if another name is using the same service and identifier string.
## Shared identifiers
Since an identifier can be used by multiple names, this can be used to the advantage of Q-App developers as it allows for data to be stored in a deterministic location.
An example of this is the user's avatar. This will always be published with service `THUMBNAIL` and identifier `qortal_avatar`, along with the user's name. So, an app can display the avatar of a user just by specifying their name when requesting the data. The same applies when publishing data.
## "Default" resources
A "default" resource refers to one without an identifier. For example, when a website is published via the UI, it will use the user's name and the service `WEBSITE`. These do not have an identifier, and are therefore the "default" website for this name. When requesting or publishing data without an identifier, apps can either omit the `identifier` key entirely, or include `"identifier": "default"` to indicate that the resource(s) being queried or published do not have an identifier.
<aname="services"></a>
## Available service types
Here is a list of currently available services that can be used in Q-Apps:
### Public services ###
The services below are intended to be used for publicly accessible data.
IMAGE,
THUMBNAIL,
VIDEO,
AUDIO,
PODCAST,
VOICE,
ARBITRARY_DATA,
JSON,
DOCUMENT,
LIST,
PLAYLIST,
METADATA,
BLOG,
BLOG_POST,
BLOG_COMMENT,
GIF_REPOSITORY,
ATTACHMENT,
FILE,
FILES,
CHAIN_DATA,
STORE,
PRODUCT,
OFFER,
COUPON,
CODE,
PLUGIN,
EXTENSION,
GAME,
ITEM,
NFT,
DATABASE,
SNAPSHOT,
COMMENT,
CHAIN_COMMENT,
WEBSITE,
APP,
QCHAT_ATTACHMENT,
QCHAT_IMAGE,
QCHAT_AUDIO,
QCHAT_VOICE
### Private services ###
For the services below, data is encrypted for a single recipient, and can only be decrypted using the private key of the recipient's wallet.
QCHAT_ATTACHMENT_PRIVATE,
ATTACHMENT_PRIVATE,
FILE_PRIVATE,
IMAGE_PRIVATE,
VIDEO_PRIVATE,
AUDIO_PRIVATE,
VOICE_PRIVATE,
DOCUMENT_PRIVATE,
MAIL_PRIVATE,
MESSAGE_PRIVATE
## Single vs multi-file resources
Some resources, such as those published with the `IMAGE` or `JSON` service, consist of a single file or piece of data (the image or the JSON string). This is the most common type of QDN resource, especially in the context of Q-Apps. These can be published by supplying a base64-encoded string containing the data.
Other resources, such as those published with the `WEBSITE`, `APP`, or `GIF_REPOSITORY` service, can contain multiple files and directories. Publishing these kinds of files is not yet available for Q-Apps, however it is possible to retrieve multi-file resources that are already published. When retrieving this data (via FETCH_QDN_RESOURCE), a `filepath` must be included to indicate the file that you would like to retrieve. There is no need to specify a filepath for single file resources, as these will automatically return the contents of the single file.
## App-specific data
Some apps may want to make all QDN data for a particular service available. However, others may prefer to only deal with data that has been published by their app (if a specific format/schema is being used for instance).
Identifiers can be used to allow app developers to locate data that has been published by their app. The recommended approach for this is to use the app name as a prefix on all identifiers when publishing data.
For instance, an app called `MyApp` could allow users to publish JSON data. The app could choose to prefix all identifiers with the string `myapp_`, and then use a random string for each published resource (resulting in identifiers such as `myapp_qR5ndZ8v`). Then, to locate data that has potentially been published by users of MyApp, it can later search the QDN database for items with `"service": "JSON"` and `"identifier": "myapp_"`. The SEARCH_QDN_RESOURCES action has a `prefix` option in order to match identifiers beginning with the supplied string.
Note that QDN is a permissionless system, and therefore it's not possible to verify that a resource was actually published by the app. It is recommended that apps validate the contents of the resource to ensure it is formatted correctly, instead of making assumptions.
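For example, MyApp could locate its own JSON resources with a prefix search (a sketch built from the SEARCH_QDN_RESOURCES parameters documented in Section 3):

```
let res = await qortalRequest({
    action: "SEARCH_QDN_RESOURCES",
    service: "JSON",
    identifier: "myapp_", // Combined with prefix, matches identifiers beginning "myapp_"
    prefix: true
});
```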
## Updating a resource
To update a resource, it can be overwritten by publishing with the same `name`, `service`, and `identifier` combination. Note that the authenticated account must currently own the name in order to publish an update.
## Routing
If a non-existent `filepath` is accessed, the default behaviour of QDN is to return a `404: File not found` error. This includes anything published using the `WEBSITE` service.
However, routing is handled differently for anything published using the `APP` service.
For apps, QDN automatically sends all unhandled requests to the index file (generally index.html). This allows the app to use custom routing, as it is able to listen on any path. If a file exists at a path, the file itself will be served, and so the request won't be sent to the index file.
It's recommended that all apps return a 404 page if a request isn't able to be routed.
# Section 1: Simple links and image loading via HTML
## Section 1a: Linking to other QDN websites / resources
The `qortal://` protocol can be used to access QDN data from within Qortal websites and apps. The basic format is `qortal://service/name/path`, e.g. `<a href="qortal://WEBSITE/QortalDemo">link text</a>`.
For cases where you would prefer to explicitly include an identifier (to remove ambiguity) you can use the keyword `default` to access a resource that doesn't have an identifier. For instance:
```
<ahref="qortal://WEBSITE/QortalDemo/default">link to root of website</a>
<ahref="qortal://WEBSITE/QortalDemo/default/minting-leveling/index.html">link to subpage of website</a>
```
## Section 1b: Linking to other QDN images
The same applies for images, such as displaying an avatar via `<img src="qortal://THUMBNAIL/QortalDemo/qortal_avatar" />`.
# Section 2: Javascript apps

Javascript apps allow for much more complex integrations with Qortal's blockchain data.
## Section 2a: Direct API calls
The standard [Qortal Core API](http://localhost:12391/api-documentation) is available to websites and apps, and can be called directly using a standard AJAX request, such as this sketch using the core's names API:
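```
const response = await fetch("/names/address/QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2");
const nameData = await response.json(); // Names registered to this address
```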
However, this only works for read-only data, such as looking up transactions, names, balances, etc. Also, since the address of the logged in account can't be retrieved from the core, apps can't show personalized data with this approach.
## Section 2b: User interaction via qortalRequest()
To take things a step further, the qortalRequest() function can be used to interact with the user, in order to:
- Request address and public key of the logged in account
- Publish data to QDN
- Send chat messages
- Join groups
- Deploy ATs (smart contracts)
- Send QORT or any supported foreign coin
- Add/remove items from lists
In addition to the above, qortalRequest() also supports many read-only functions that are also available via direct core API calls. Using qortalRequest() helps with futureproofing, as the core APIs can be modified without breaking functionality of existing Q-Apps.
### Making a request
Qortal core will automatically inject the `qortalRequest()` javascript function into all websites/apps, which returns a Promise. This can be used to fetch data or publish data to the Qortal blockchain. This functionality supports async/await, as well as try/catch error handling.
```
async function myfunction() {
    try {
        let res = await qortalRequest({
            action: "GET_ACCOUNT_DATA",
            address: "QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2"
        });
        console.log(JSON.stringify(res)); // Log the response to the console
    } catch(e) {
        console.log("Error: " + JSON.stringify(e));
    }
}
myfunction();
```
### Timeouts
Request timeouts are handled automatically when using qortalRequest(). The timeout value will differ based on the action being used - see `getDefaultTimeout()` in [q-apps.js](src/main/resources/q-apps/q-apps.js) for the current values.
If a request times out it will throw an error - `The request timed out` - which can be handled by the Q-App.
It is also possible to specify a custom timeout using `qortalRequestWithTimeout(request, timeout)`, however this is discouraged. It's more reliable and futureproof to let the core handle the timeout values.
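For example:

```
let res = await qortalRequestWithTimeout({
    action: "GET_USER_ACCOUNT"
}, 10000); // Custom timeout of 10 seconds
```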
# Section 3: qortalRequest Documentation
## Supported actions
Here is a list of currently supported actions:
- GET_USER_ACCOUNT
- GET_ACCOUNT_DATA
- GET_ACCOUNT_NAMES
- SEARCH_NAMES
- GET_NAME_DATA
- LIST_QDN_RESOURCES
- SEARCH_QDN_RESOURCES
- GET_QDN_RESOURCE_STATUS
- GET_QDN_RESOURCE_PROPERTIES
- GET_QDN_RESOURCE_METADATA
- GET_QDN_RESOURCE_URL
- LINK_TO_QDN_RESOURCE
- FETCH_QDN_RESOURCE
- PUBLISH_QDN_RESOURCE
- PUBLISH_MULTIPLE_QDN_RESOURCES
- DECRYPT_DATA
- SAVE_FILE
- GET_WALLET_BALANCE
- GET_BALANCE
- SEND_COIN
- SEARCH_CHAT_MESSAGES
- SEND_CHAT_MESSAGE
- LIST_GROUPS
- JOIN_GROUP
- DEPLOY_AT
- GET_AT
- GET_AT_DATA
- LIST_ATS
- FETCH_BLOCK
- FETCH_BLOCK_RANGE
- SEARCH_TRANSACTIONS
- GET_PRICE
- GET_LIST_ITEMS
- ADD_LIST_ITEMS
- DELETE_LIST_ITEM
More functionality will be added in the future.
## Example Requests
Here are some example requests for each of the above:
### Get address of logged in account
_Will likely require user approval_
```
let account = await qortalRequest({
action: "GET_USER_ACCOUNT"
});
let address = account.address;
```
### Get public key of logged in account
_Will likely require user approval_
```
let account = await qortalRequest({
    action: "GET_USER_ACCOUNT"
});
let publicKey = account.publicKey;
```
### Get account data
```
let res = await qortalRequest({
action: "GET_ACCOUNT_DATA",
address: "QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2"
});
```
### Get names owned by account
```
let res = await qortalRequest({
action: "GET_ACCOUNT_NAMES",
address: "QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2"
});
```
### Search names
```
let res = await qortalRequest({
    action: "SEARCH_NAMES",
    query: "search query goes here",
    prefix: false, // Optional - if true, only the beginning of the name is matched
    limit: 100,
    offset: 0,
    reverse: true
});
```
### Search QDN resources
```
let res = await qortalRequest({
    action: "SEARCH_QDN_RESOURCES",
    service: "THUMBNAIL",
    query: "search query goes here", // Optional - searches both "identifier" and "name" fields
    identifier: "search query goes here", // Optional - searches only the "identifier" field
    name: "search query goes here", // Optional - searches only the "name" field
    prefix: false, // Optional - if true, only the beginning of fields are matched in all of the above filters
    exactMatchNames: true, // Optional - if true, partial name matches are excluded
    default: false, // Optional - if true, only resources without identifiers are returned
    mode: "LATEST", // Optional - whether to return all resources or just the latest for a name/service combination. Possible values: ALL,LATEST. Default: LATEST
    minLevel: 1, // Optional - filter results by minimum account level
    includeStatus: false, // Optional - will take time to respond, so only request if necessary
    includeMetadata: false, // Optional - will take time to respond, so only request if necessary
    nameListFilter: "QApp1234Subscriptions", // Optional - will only return results if they are from a name included in supplied list
    followedOnly: false, // Optional - include followed names only
    // before: 1683546000000, // Optional - limit to resources created before timestamp
    // after: 1683546000000, // Optional - limit to resources created after timestamp
    limit: 100,
    offset: 0,
    reverse: true
});
```
### Search QDN resources (multiple names)
```
let res = await qortalRequest({
action: "SEARCH_QDN_RESOURCES",
service: "THUMBNAIL",
query: "search query goes here", // Optional - searches both "identifier" and "name" fields
identifier: "search query goes here", // Optional - searches only the "identifier" field
names: ["QortalDemo", "crowetic", "AlphaX"], // Optional - searches only the "name" field for any of the supplied names
prefix: false, // Optional - if true, only the beginning of fields are matched in all of the above filters
exactMatchNames: true, // Optional - if true, partial name matches are excluded
default: false, // Optional - if true, only resources without identifiers are returned
mode: "LATEST", // Optional - whether to return all resources or just the latest for a name/service combination. Possible values: ALL,LATEST. Default: LATEST
includeStatus: false, // Optional - will take time to respond, so only request if necessary
includeMetadata: false, // Optional - will take time to respond, so only request if necessary
nameListFilter: "QApp1234Subscriptions", // Optional - will only return results if they are from a name included in supplied list
followedOnly: false, // Optional - include followed names only
// before: 1683546000000, // Optional - limit to resources created before timestamp
// after: 1683546000000, // Optional - limit to resources created after timestamp
limit: 100,
offset: 0,
reverse: true
});
```
### Fetch QDN single file resource
```
let res = await qortalRequest({
action: "FETCH_QDN_RESOURCE",
name: "QortalDemo",
service: "THUMBNAIL",
identifier: "qortal_avatar", // Optional. If omitted, the default resource is returned, or you can alternatively use the keyword "default"
encoding: "base64", // Optional. If omitted, data is returned in raw form
rebuild: false
});
```
### Fetch file from multi file QDN resource
Data is returned in the base64 format
```
let res = await qortalRequest({
action: "FETCH_QDN_RESOURCE",
name: "QortalDemo",
service: "WEBSITE",
identifier: "default", // Optional. If omitted, the default resource is returned, or you can alternatively request that using the keyword "default", as shown here
filepath: "index.html", // Required only for resources containing more than one file
rebuild: false
});
```
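Because the data arrives base64 encoded, it usually needs decoding before use. A minimal sketch for converting the response to raw bytes in the browser (the `text/html` MIME type matches the index.html example above and is otherwise an assumption):
```
// Decode the base64 payload returned by FETCH_QDN_RESOURCE
const binary = atob(res);                                    // base64 -> binary string
const bytes = Uint8Array.from(binary, c => c.charCodeAt(0)); // binary string -> raw bytes
const blob = new Blob([bytes], { type: "text/html" });       // MIME type is an assumption
const objectUrl = URL.createObjectURL(blob);                 // usable as an iframe/img src
```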
### Get QDN resource status
```
let res = await qortalRequest({
action: "GET_QDN_RESOURCE_STATUS",
name: "QortalDemo",
service: "THUMBNAIL",
identifier: "qortal_avatar", // Optional
build: true // Optional - request that the resource is fetched & built in the background
});
```
### Publish a single file to QDN
_Requires user approval_.<br/>
Note: this publishes a single, base64-encoded file. Multi-file resource publishing (such as a WEBSITE or GIF_REPOSITORY) is not yet supported via a Q-App. It will be added in a future update.
```
let res = await qortalRequest({
action: "PUBLISH_QDN_RESOURCE",
name: "Demo", // Publisher must own the registered name - use GET_ACCOUNT_NAMES for a list
service: "IMAGE",
identifier: "myapp-image1234" // Optional
data64: "base64_encoded_data",
// filename: "image.jpg", // Optional - to help apps determine the file's type
// title: "Title", // Optional
// description: "Description", // Optional
// category: "TECHNOLOGY", // Optional
// tag1: "any", // Optional
// tag2: "strings", // Optional
// tag3: "can", // Optional
// tag4: "go", // Optional
// tag5: "here", // Optional
// encrypt: true, // Optional - to be used with a private service
// recipientPublicKey: "publickeygoeshere" // Only required if `encrypt` is set to true
});
```
### Publish multiple resources at once to QDN
_Requires user approval_.<br/>
Note: each resource being published consists of a single, base64-encoded file, each in its own transaction. Useful for publishing two or more related things, such as a video and a video thumbnail.
```
let res = await qortalRequest({
action: "PUBLISH_MULTIPLE_QDN_RESOURCES",
resources: [
    {
        name: "Demo", // Publisher must own the registered name - use GET_ACCOUNT_NAMES for a list
        service: "IMAGE",
        identifier: "myapp-image1234", // Optional
        data64: "base64_encoded_data",
        // filename: "image.jpg", // Optional - to help apps determine the file's type
        // title: "Title", // Optional
        // description: "Description", // Optional
        // category: "TECHNOLOGY", // Optional
        // tag1: "any", // Optional
        // tag2: "strings", // Optional
        // tag3: "can", // Optional
        // tag4: "go", // Optional
        // tag5: "here", // Optional
        // encrypt: true, // Optional - to be used with a private service
        // recipientPublicKey: "publickeygoeshere" // Only required if `encrypt` is set to true
    }
    // Add more resource objects here - each is published in its own transaction
],
// fee: 0.00000020 // Optional - custom fee (default fee used if omitted, recommended)
});
```
### Search or list chat messages
```
let res = await qortalRequest({
action: "SEARCH_CHAT_MESSAGES",
before: 999999999999999,
after: 0,
txGroupId: 0, // Optional (must specify either txGroupId or two involving addresses)
// involving: ["QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2", "QSefrppsDCsZebcwrqiM1gNbWq7YMDXtG2"], // Optional (must specify either txGroupId or two involving addresses)
// reference: "reference", // Optional
// chatReference: "chatreference", // Optional
// hasChatReference: true, // Optional
encoding: "BASE64", // Optional (defaults to BASE58 if omitted)
Note: this returns a "Resource does not exist" error if a non-existent resource is requested.
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "THUMBNAIL",
name: "QortalDemo",
identifier: "qortal_avatar"
// path: "filename.jpg" // optional - not needed if resource contains only one file
});
```
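The returned URL can then be used anywhere a regular URL is accepted; for example (assuming the page contains an `<img>` element):
```
document.querySelector("img").src = url; // Show the resource in an existing <img> tag
```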
### Get URL to load a QDN website
Note: this returns a "Resource does not exist" error if a non-existent resource is requested.
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "QortalDemo",
});
```
### Get URL to load a specific file from a QDN website
Note: this returns a "Resource does not exist" error if a non-existent resource is requested.
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "AlphaX",
path: "/assets/img/logo.png"
});
```
### Link/redirect to another QDN website
Note: an alternate method is to include `<a href="qortal://WEBSITE/QortalDemo">link text</a>` within your HTML code.
```
let res = await qortalRequest({
action: "LINK_TO_QDN_RESOURCE",
service: "WEBSITE",
name: "QortalDemo",
});
```
### Link/redirect to a specific path of another QDN website
Note: an alternate method is to include `<a href="qortal://WEBSITE/QortalDemo/minting-leveling/index.html">link text</a>` within your HTML code.
```
let res = await qortalRequest({
action: "LINK_TO_QDN_RESOURCE",
service: "WEBSITE",
name: "QortalDemo",
path: "/minting-leveling/index.html"
});
```
### Get the contents of a list
_Requires user approval_
```
let res = await qortalRequest({
action: "GET_LIST_ITEMS",
list_name: "followedNames"
});
```
### Add one or more items to a list
_Requires user approval_
```
let res = await qortalRequest({
action: "ADD_LIST_ITEMS",
list_name: "blockedNames",
items: ["QortalDemo"]
});
```
### Delete a single item from a list
_Requires user approval_.
Items must be deleted one at a time.
```
let res = await qortalRequest({
action: "DELETE_LIST_ITEM",
list_name: "blockedNames",
item: "QortalDemo"
});
```
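Because only one item can be removed per call, deleting several items means issuing the request once per item. A minimal sketch (the helper name `deleteListItems` is illustrative, not part of the API):
```
async function deleteListItems(listName, items) {
    // Each item needs its own DELETE_LIST_ITEM request
    for (const item of items) {
        await qortalRequest({
            action: "DELETE_LIST_ITEM",
            list_name: listName,
            item: item
        });
    }
}
```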
# Section 4: Examples
Some example projects can be found [here](https://github.com/Qortal/Q-Apps). These can be cloned and modified, or used as a reference when creating a new app.
## Sample App
Here is a sample application to display the logged-in user's avatar:
```
<html>
<head>
<script>
async function showAvatar() {
try {
// Get QORT address of logged in account
let account = await qortalRequest({
action: "GET_USER_ACCOUNT"
});
let address = account.address;
console.log("address: " + address);
// Get names owned by this account
let names = await qortalRequest({
action: "GET_ACCOUNT_NAMES",
address: address
});
console.log("names: " + JSON.stringify(names));
if (names.length == 0) {
console.log("User has no registered names");
return;
}
        // Download the base64-encoded avatar of the first registered name
        let avatar = await qortalRequest({
            action: "FETCH_QDN_RESOURCE",
            name: names[0].name,
            service: "THUMBNAIL",
            identifier: "qortal_avatar",
            encoding: "base64"
        });

        // Display the avatar image on the page
        document.getElementsByTagName("img")[0].src = "data:image/png;base64," + avatar;

    } catch(e) {
        console.log("Error: " + JSON.stringify(e));
    }
}
</script>
</head>
<body onload="showAvatar()">
    <img width="500" />
</body>
</html>
```
## Testing and Development
Publishing an in-development app to mainnet isn't recommended. There are several options for developing and testing a Q-App before publishing to mainnet:
### Preview mode
Select "Preview" in the UI after choosing the zip. This allows for full Q-App testing without the need to publish any data.
### Testnets
For an end-to-end test of Q-App publishing, you can use the official testnet, or set up a single node testnet of your own (often referred to as devnet) on your local machine. See [Single Node Testnet Quick Start Guide](testnet/README.md#quick-start).
### Debugging
It is recommended that you develop and test in a web browser, to allow access to the javascript console. To do this:
1. Open the UI app, then minimise it.
2. In a Chromium-based web browser, visit: http://localhost:12388/
3. Log in to your account and then preview your app/website.
4. Go to `View > Developer > JavaScript Console`. Here you can monitor console logs, errors, and network requests from your app, in the same way as any other web-app.
# The Qortal Core
The Qortal Core is the blockchain and node component of the overall project. It provides the primary API, the ability to create transactions, and the means to interact with the Qortal blockchain network.
Running the Qortal Core requires a machine with Java 11 or later installed. Minimum RAM requirements vary with settings, but as little as 4GB of RAM should be acceptable in most scenarios.
Qortal is a complete infrastructure platform with a blockchain backend. It is capable of indefinite web and application hosting with no continual fees, replaces DNS, centralized naming systems, and communications systems, and is intended as the foundation of the next generation of digital infrastructure. Qortal is unique in nearly every way: it was written from scratch to address as many concerns as possible from both the existing 'blockchain space' and the 'typical internet', while remaining easy to use and able to run on 'any' computer.
Qortal contains extensive functionality geared toward complete decentralization of the digital world: removal of 'middlemen' of any kind from all transactions, and the ability to publish websites and applications with no continual fees, on a name that is truly owned by the account that registered it, or that purchased it from another. A single name on Qortal can act as both a namespace and a 'username'. That single name can carry an application, a website, public and private data, communications, authentication, the namespace itself, and more, all of which can subsequently be sold to anyone else without changing any 'hosting' or DNS entries (none exist) or email (none exists), while keeping the same functionality as the web 2.0 features it replaces.
Over time Qortal has progressed into a fully featured environment catering to all types of people and organizations, and it will continue to advance, bringing more features, capability, device support, and available replacements for web 2.0, ultimately building a new, completely user-controlled digital world without limits.
Qortal has no owner and no company on top of it; it is completely community built, run, and funded. A community-established group of developers, known as the 'dev-group' or Qortal Development Group, makes group-approval-based decisions for the project's future. If you are a developer interested in assisting with the project, you may reach out to the Qortal Development Group in any of the available Qortal community locations, either on the Qortal network itself or on one of the temporary centralized social media locations.
Building the future one block at a time. Welcome to Qortal.
# Building the Qortal Core from Source
## Build / run
- Requires Java 11 or later, plus Maven
- Fetch dependencies and build with Maven: `mvn clean package`
- Run JAR in same working directory as *settings.json*: `java -jar target/qortal-1.0.jar`
- Wrap in shell script, add JVM flags, redirection, backgrounding, etc. as necessary.
- Or use supplied example shell script: *start.sh*
# Using a pre-built Qortal 'jar' binary
If you would prefer to use a released version of Qortal, you can download one of the available releases from the releases page; releases are also linked on https://qortal.org and https://qortal.dev.
# Learning Q-App Development
https://qortal.dev contains developer documentation for building JS/React (and other client-side) applications, or 'Q-Apps', on Qortal. Q-Apps are published on registered Qortal names and, aside from a single name-registration fee and a fraction of QORT for the publish transaction, incur zero continual costs. These applications become more redundant with each access from a new Qortal node, making your application faster for the next user to download and stronger as time goes on. Q-Apps live indefinitely in the history of the blockchain-secured Qortal Data Network (QDN).
# How to learn more
If the project interests you, you can learn more from the various web2- and QDN-based websites focused on introductory information:
https://qortal.org - primary internet presence
https://qortal.dev - secondary and development focused website with links to many new developments and documentation
https://wiki.qortal.org - community built and managed wiki with detailed information regarding the project
Links to the Telegram and Discord communities are at the top of https://qortal.org as well.