01-28-2019 11:02 AM
Hi
There may be a protocol to do this, there may not. If anyone can shed any light on this problem, please let me know.
I have a network communications link based on variable-length messages over TCP/IP. I do not want to use NI proprietary network global variables or any similar system, as that locks me out of being able to develop other nodes in another language or environment.
My variable-length messages are formed as clusters, which are then flattened into an array of bytes and sent with the first two bytes representing the length of the message -
LENGTH | BODY
At the other end, the receiver streams in the number of bytes indicated by LENGTH and converts them back to a cluster. This works fine until the link (or the application, due to start-up issues or otherwise) loses sync with the length/body/length/body sequence of streamed TCP/IP bytes. Body bytes get interpreted as length bytes, and the application receives bad messages.
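For reference, the LENGTH | BODY scheme described above can be sketched in plain Python (an illustration only, not LabVIEW; the function names are mine). The key detail is that TCP is a byte stream, so the receiver must loop until it has exactly the number of bytes it expects:

```python
import socket
import struct

def send_message(sock: socket.socket, body: bytes) -> None:
    """Prefix the body with a 2-byte big-endian length, then send it all."""
    sock.sendall(struct.pack(">H", len(body)) + body)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because recv() may return fewer."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    """Read the 2-byte length header, then exactly that many body bytes."""
    (length,) = struct.unpack(">H", recv_exact(sock, 2))
    return recv_exact(sock, length)
```

As long as both ends stay in step, every `recv_message` call returns one complete body; the failure mode discussed in this thread begins when that framing assumption is ever violated.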
To catch bad messages, I added a checksum -
LENGTH | BODY | CHECKSUM
This is received at the other end and the entire message thrown away if the checksum fails.
The only problem is that this does not help me get resynchronised, and it loses me a message which was probably okay. The only solution I can think of to minimise disruption is to add a unique sequence of bytes at the start, to help determine where a message begins -
SYNC | LENGTH | BODY | CHECKSUM
This works great until my message body unwittingly contains the SYNC sequence. Although the checksum catches any badly received message until I resync, it does mean I lose a message purely because of how I programmed my variable-length messaging system and ignored the body content.
The only way to overcome the above is to break up any sync values appearing in the body during transmission, and to take this into account during reception.
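Breaking up in-body sync values is the classic byte-stuffing technique used by SLIP and HDLC-style framing. A minimal Python sketch, assuming a hypothetical one-byte SYNC of 0x7E and escape byte 0x7D (both values are my choices for illustration):

```python
SYNC = b"\x7e"  # assumed frame delimiter
ESC = b"\x7d"   # assumed escape byte

def stuff(body: bytes) -> bytes:
    """Escape any SYNC/ESC bytes in the body, so a raw SYNC byte only
    ever appears on the wire at a true frame boundary."""
    out = bytearray()
    for byte in body:
        if byte in (SYNC[0], ESC[0]):
            out += ESC
            out.append(byte ^ 0x20)  # XOR so the reserved value never appears
        else:
            out.append(byte)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    """Reverse the escaping on the receiving side."""
    out = bytearray()
    it = iter(data)
    for byte in it:
        if byte == ESC[0]:
            out.append(next(it) ^ 0x20)
        else:
            out.append(byte)
    return bytes(out)
```

The cost is a worst-case doubling of body size; schemes like COBS (Consistent Overhead Byte Stuffing) bound the overhead much more tightly if that matters.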
But before I go launching into a fiddly bit of programming, can anyone tell me if there is an existing protocol for transmitting variable-length messages over TCP/IP byte streams?
I don't want to start using time as a synchronisation mechanism, as this limits my messaging rate and I would suddenly need to establish common timeouts at both the transmitter and receiver. I would much rather use a protocol which logically separates variable-length messages.
Any ideas??
01-28-2019 12:25 PM - edited 01-28-2019 12:25 PM
It is just an inherent issue with binary (as opposed to strictly ASCII) data transport. Keep everything in a buffer and scan for the first sync. If the checksum fails, don't throw everything away; it might just be that something got lost. Just throw away anything up until the next sync.
I use a sync at the beginning and end of the message (different syncs for both). This saves me from having to calculate a checksum all the time.
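The buffer-scan resync described above might look like this in Python (the SYNC value, the 2-byte length field, and the simple mod-256 sum checksum are all assumptions for illustration):

```python
SYNC = b"\xaa\x55"  # assumed 2-byte start-of-frame marker

def extract_frames(buffer: bytearray) -> list[bytes]:
    """Pull complete SYNC|LENGTH|BODY|CHECKSUM frames out of a receive
    buffer. On a bad checksum, skip only to the next SYNC instead of
    flushing the whole buffer; partial frames stay for the next read."""
    frames = []
    while True:
        start = buffer.find(SYNC)
        if start < 0:
            buffer[:] = buffer[-1:]  # keep last byte: may be half a SYNC
            break
        del buffer[:start]          # drop garbage before the sync
        if len(buffer) < 4:         # need SYNC (2) + LENGTH (2)
            break
        length = int.from_bytes(buffer[2:4], "big")
        end = 4 + length + 1        # +1 for the assumed 1-byte checksum
        if len(buffer) < end:
            break                   # frame not fully received yet
        body = bytes(buffer[4:4 + length])
        if buffer[end - 1] == (sum(body) & 0xFF):
            frames.append(body)
            del buffer[:end]
        else:
            del buffer[:2]          # bad frame: rescan past this sync
    return frames
```

Note the bad-checksum branch deletes only the sync marker itself before rescanning, so a body byte pattern that merely looked like a sync costs at most one discarded frame, not the whole buffer.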
01-28-2019 01:23 PM
TCP already has checksums and resends corrupted IP packets. It never loses sync. Most likely you have a bug at either the sending or receiving end. You'd be better off just fixing that bug than building lots of extra overhead to recover from its effect. TCP is reliable.
01-28-2019 03:11 PM
@drjdpowell wrote:
TCP already has checksums and resends corrupted IP packets. It never loses sync. Most likely you have a bug at either the sending or receiving end. You'd be better off just fixing that bug than building lots of extra overhead to recover from its effect. TCP is reliable.
We're talking about data that may be sent over multiple packets. It is possible for packets to be missing (an unreliable connection, possibly?). I don't think it's unreasonable to assume the possibility that data may be incomplete.
01-28-2019 04:32 PM
IP packets can be lost; that's why TCP was invented. It's a protocol for resending lost or corrupted IP packets, so it is redundant to do extra error checking on top of it.
01-28-2019 04:44 PM
Yes, and if TCP and transmission lines were perfect, I would never see an incomplete web page.
01-28-2019 04:57 PM
@Mancho00 wrote:
Yes, and if TCP and transmission lines were perfect, I would never see an incomplete web page.
Web pages use many parallel connections. Anything shown by the browser came from a complete connection. Successful connections are "perfect" (except for the rare checksum collision) and are correctly reassembled even if the packets arrived out of order, or some even in duplicate. TCP will request retransmission of missing packets, and if a packet never arrives, the entire transmission is discarded. Any connection that completes without error is typically perfect.
01-28-2019 05:03 PM - edited 01-28-2019 05:06 PM
Figures I would get an education. Thanks.
Makes me glad I didn't bother with a checksum on my latest.
01-28-2019 05:45 PM
There are two applications of checksums in communications that I can think of where they make sense: serial transmission and UDP.
01-28-2019 06:35 PM
@billko wrote:
There are two applications of checksums in communications that I can think of where they make sense: serial transmission and UDP.
At the user level, that is correct. As already stated, TCP has a checksum as part of each packet. You as the user just don't have to worry about it.