BlockMessage was broken because the repository 'connection' associated with the message's Block object was closed between message queuing and message sending.
The fix was to serialize Message subclasses on construction, removing any reliance on objects passed into the constructor.
The serialized byte[] is held by the message between queuing and sending.
This forces messages into one of two 'modes': outgoing or incoming (see the sketch below).
Outgoing messages carry a pre-serialized byte[], whereas incoming messages unpack a ByteBuffer into Message subclass fields.
As a result, all network message types have been refactored in this way.
More details in Message's class comment.
A knock-on effect is that an incoming message cannot simply be sent back out - a new message has to be constructed instead.
Some changes were needed to Arbitrary controller package classes in this respect.
Bonus: Network no longer needs broadcast threads because 'broadcasting' is now simply the act of queuing a message for many peers.
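A minimal sketch of the two-mode idea; the field and method names below are assumptions, not the real Message class:

```java
// Hypothetical sketch only; the actual Message class differs in detail.
public abstract class Message {
    private final int type;
    private final byte[] dataBytes; // non-null for outgoing messages, null for incoming

    /** Outgoing: payload is serialized at construction time, so no later reliance on constructor args. */
    protected Message(int type, byte[] serializedPayload) {
        this.type = type;
        this.dataBytes = serializedPayload; // held by the message between queuing and sending
    }

    /** Incoming: fields are unpacked from a ByteBuffer by each subclass's fromByteBuffer(). */
    protected Message(int type) {
        this.type = type;
        this.dataBytes = null; // incoming messages can't be re-sent; construct a new message to send one out
    }

    public boolean isOutgoing() {
        return this.dataBytes != null;
    }
}
```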
Instead of synchronizing/blocking in Peer.sendMessage(),
messages are queued in a concurrent, blocking TransferQueue with a timeout.
In EPC, ChannelWriteTasks consume from the TransferQueue, unblocking callers of Peer.sendMessage() (see the sketch below).
If there is a new message to send, or the socket's output buffer is full,
then OP_WRITE is used to wait for the socket to become writable again.
Only one ChannelWriteTask per peer can be active/pending at a time.
Each ChannelWriteTask tries to send as much as it can in one go.
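A rough, self-contained sketch of this producer/consumer split; the names, timeout value and error handling are assumptions, not the real Peer/ChannelWriteTask code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TransferQueue;

// Illustrative only.
class SendQueueSketch {
    private final TransferQueue<byte[]> sendQueue = new LinkedTransferQueue<>();
    private static final long QUEUE_TIMEOUT_MS = 1000L; // assumed timeout
    private ByteBuffer pending; // partially written message carried over to the next write task

    /** Producer side (cf. Peer.sendMessage): callers wait only on the queue, never on the socket. */
    boolean sendMessage(byte[] serializedMessage) throws InterruptedException {
        return this.sendQueue.tryTransfer(serializedMessage, QUEUE_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    }

    /** Consumer side (cf. ChannelWriteTask): write as much as possible in one go. */
    void channelWriteTask(SocketChannel channel, SelectionKey key) throws IOException {
        while (true) {
            if (this.pending == null) {
                byte[] next = this.sendQueue.poll();
                if (next == null)
                    return; // nothing left to send

                this.pending = ByteBuffer.wrap(next);
            }

            channel.write(this.pending);

            if (this.pending.hasRemaining()) {
                // Socket output buffer is full: wait for OP_WRITE instead of sleeping.
                key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
                return;
            }

            this.pending = null; // fully sent; move on to the next queued message
        }
    }
}
```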
Other minor tidy-ups.
As per work done by szisti in PR#45:
Extracted network 'Tasks' to their own classes.
Network.NetworkProcessor reduced to only producing Tasks.
Improved usage of SocketChannel interest-ops.
Eventually this might allow the task-producing synchronization lock to be replaced with more granular locks.
Work is still needed to convert message sending to a queue and to make use of OP_WRITE instead of sleeping while waiting for the socket buffer to empty.
Disabled the PeerConnectTask producer's check against connected peers via DNS, as it's too slow.
Swapped Peer's replyQueues from a synchronizedMap-wrapped HashMap to a ConcurrentHashMap.
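For illustration, the change amounts to roughly the following (generic types are assumed):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

class ReplyQueuesSketch {
    // Before: every access goes through a single wrapper lock.
    private final Map<Integer, BlockingQueue<Object>> oldReplyQueues =
            Collections.synchronizedMap(new HashMap<>());

    // After: ConcurrentHashMap handles concurrent access internally, with finer-grained locking.
    private final Map<Integer, BlockingQueue<Object>> newReplyQueues = new ConcurrentHashMap<>();
}
```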
Other minor changes within networking.
As per work done by szisti in PR#45:
Extracted MessageException from inside Message into its own class.
Extracted MessageType from inside Message into its own class.
Converted the reflection-based Message.fromByteBuffer method call to a non-reflective method reference via a functional interface.
This should give a minor performance improvement plus stronger method-signature and type enforcement, as well as better IDE integration.
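A self-contained sketch of the approach; the interface name, enum entries and exact signatures here are assumptions rather than the repository's actual code:

```java
import java.nio.ByteBuffer;

// Hypothetical, self-contained sketch; real names and signatures differ.
public class MethodReferenceSketch {

    static class MessageException extends Exception {}

    static class Message {}

    static class ExampleMessage extends Message {
        // Assumed shape of the per-subclass deserializer referenced below.
        public static Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws MessageException {
            return new ExampleMessage();
        }
    }

    /** Functional interface replacing the old reflective Method lookup. */
    @FunctionalInterface
    interface MessageProducer {
        Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws MessageException;
    }

    enum MessageType {
        EXAMPLE(ExampleMessage::fromByteBuffer); // method reference, checked at compile time

        public final MessageProducer fromByteBufferMethod;

        MessageType(MessageProducer fromByteBufferMethod) {
            this.fromByteBufferMethod = fromByteBufferMethod;
        }
    }

    static Message deserialize(MessageType type, int id, ByteBuffer buffer) throws MessageException {
        // No Method.invoke(), no reflection exceptions to unwrap.
        return type.fromByteBufferMethod.fromByteBuffer(id, buffer);
    }
}
```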
Message.fromByteBuffer method 'contract' tightened up to (illustrated in the sketch further below):
1. throw BufferUnderflowException if there are not enough bytes to deserialize the message
2. throw MessageException if bytes contain invalid data
3. should not return null
Message.toData method 'contract' tightened up to:
1. return null if the message has no payload to serialize
2. throw IOException directly - no need to try-catch in each subclass
Several Message-subclass fields now marked 'final' as per IDE suggestion.
Several Message-subclass fromByteBuffer() method signatures have changed 'throws' list.
Several bytes.remaining() != some-value checks changed to bytes.remaining() < some-value, as per the new contract.
Some bytes.remaining() checks removed for fixed-length messages because we can rely on ByteBuffer throwing BufferUnderflowException.
Some bytes.remaining() checks retained for variable-length messages, or messages that read a large amount of data, to prevent wasted memory allocations.
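As an illustration of the tightened contracts, a hypothetical variable-length subclass might look roughly like this; the names, wire format and the Message/MessageType/MessageException constructor details are assumptions, not code from the repository:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

// Purely illustrative subclass.
public class ExampleDataMessage extends Message {

    private final byte[] data; // can be final: set once, on construction

    public ExampleDataMessage(byte[] data) {
        super(MessageType.EXAMPLE_DATA); // assumed constructor shape
        this.data = data;
    }

    public static Message fromByteBuffer(int id, ByteBuffer bytes) throws MessageException {
        int dataLength = bytes.getInt(); // may throw BufferUnderflowException, which the contract allows

        if (dataLength < 0)
            throw new MessageException(); // invalid data per contract (constructor args assumed)

        // Variable-length payload: check remaining bytes up front to avoid a wasted large allocation.
        if (bytes.remaining() < dataLength)
            throw new BufferUnderflowException();

        byte[] data = new byte[dataLength];
        bytes.get(data);

        return new ExampleDataMessage(data);
    }

    @Override
    protected byte[] toData() throws IOException {
        // Contract: throw IOException directly (no per-subclass try-catch); return null if there were no payload.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        bytes.write(ByteBuffer.allocate(4).putInt(this.data.length).array());
        bytes.write(this.data);
        return bytes.toByteArray();
    }
}
```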
Other minor tidying up
Temporarily increase sleep from 1ms to 100ms when waiting for outgoing socket buffer to empty.
Real fix is to rewrite using an outgoing message queue and OP_WRITE interest op.
De-register a peer's socket channel OP_READ interest op when producing a ChannelTask for that peer.
This should prevent duplicate ChannelTasks for the same peer.
Re-register OP_READ once node has read from peer's channel.
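Roughly, in terms of the standard java.nio interest-op calls (the helper names here are made up):

```java
import java.nio.channels.SelectionKey;

// Illustrative helpers only; the real Network code wraps this differently.
class ReadInterestSketch {
    /** Called when producing a ChannelTask for a peer, so no duplicate task is produced for the same channel. */
    static void pauseReads(SelectionKey peerKey) {
        peerKey.interestOps(peerKey.interestOps() & ~SelectionKey.OP_READ);
    }

    /** Called once the node has finished reading from the peer's channel. */
    static void resumeReads(SelectionKey peerKey) {
        peerKey.interestOps(peerKey.interestOps() | SelectionKey.OP_READ);
    }
}
```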
When node has reached max connections, Network will ignore pending incoming connections by:
1. not calling accept()
2. de-registering OP_ACCEPT 'interest op' on the listen socket's channel
When a peer disconnects, Network might re-register OP_ACCEPT interest op on listen socket.
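The same interest-op pattern applies to the listen socket's key (again, the helper names are made up):

```java
import java.nio.channels.SelectionKey;

// Illustrative helpers only; the real Network code decides when to call these.
class AcceptInterestSketch {
    /** At max connections: stop selecting OP_ACCEPT, so pending incoming connections are simply not accept()ed. */
    static void stopAccepting(SelectionKey listenSocketKey) {
        listenSocketKey.interestOps(listenSocketKey.interestOps() & ~SelectionKey.OP_ACCEPT);
    }

    /** After a peer disconnects there may be room again, so re-register interest in OP_ACCEPT. */
    static void resumeAccepting(SelectionKey listenSocketKey) {
        listenSocketKey.interestOps(listenSocketKey.interestOps() | SelectionKey.OP_ACCEPT);
    }
}
```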
Slight reworking of EPC to simplify when producer can block
and generally make some of the conditional code more readable.
Improved logging: tasks are now logged with their class names, and the logging level can be changed at runtime!
Use /peer/enginestats?newLoggingLevel=DEBUG (or TRACE or back to INFO) to change.
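Under the hood this likely boils down to something like the following Log4j2 call (a sketch, assuming Log4j2 is the logging backend; the real endpoint code may differ):

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

class LoggingLevelSketch {
    /** What the enginestats endpoint might call when given newLoggingLevel=DEBUG/TRACE/INFO. */
    static void setLoggingLevel(String newLevel) {
        Level level = Level.valueOf(newLevel); // throws IllegalArgumentException for unknown names
        Configurator.setRootLevel(level); // changes the root logger level at runtime
    }
}
```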
Note that it is not currently easy to distinguish between MESSAGE-type and PAYMENT-type AT transactions, so PAYMENT-type is the only one supported (and used) for now. A hard fork will likely be needed in order to specify the type within each message.
This is a more standardized alternative to using GET /transactions/search?address=xyz. This avoids the need to build full transaction search ability into the lite node protocols right away.
This should bring in enough data for very basic chat and wallet functionality (using addresses rather than registered names).
Data currently comes from a single random peer, but this can be expanded to request from multiple peers to gain confidence in the accuracy of the data. If a peer returns bad data it's not the end of the world, since full nodes would simply consider the transaction invalid and throw it out; even so, taking data from multiple sources should make this mostly avoidable.
Lite nodes can't sync or mint blocks, and they also have a very limited ability to verify unconfirmed transactions due to a lack of contextual information (i.e. the blockchain). For now, most validation is skipped and they simply act as relays to help get transactions around the network. Full and topOnly nodes will disregard any invalid transactions upon receipt as usual, and since the lite nodes aren't signing any blocks, there is little risk to the reduced validation, other than the experience of the lite node itself. This can be tightened up considerably as the lite nodes become more powerful, but the current approach works as a PoC.