LabVIEW


Robust Variable Length TCP/IP Messaging

Solved!
Go to solution

Hi

 

There may be a protocol to do this, there may not. If anyone can shed any light on this problem, please let me know.

 

I have a network communications link based on variable-length messages over TCP/IP. I am not using (and do not want to use) NI's proprietary network shared variables or any similar system, as that locks me out of being able to develop other nodes in another language or environment.

 

My variable-length messages are formed as clusters, which are then flattened to an array of bytes and sent with the first two bytes representing the length of the message -

LENGTH | BODY

This is received at the other end, which reads the number of bytes indicated by LENGTH and converts them back to a cluster. This works fine until the link (or the application, due to start-up issues or otherwise) for some reason loses sync with the length/body/length/body sequence of streamed TCP/IP data bytes. Body bytes get interpreted as length bytes and the application receives bad messages.

 

To catch bad messages, I added a checksum -

LENGTH | BODY | CHECKSUM

This is received at the other end, and the entire message is thrown away if the checksum fails.

 

The only problem is that this does not help me get resynchronised, and it loses me a message which was probably okay. The only solution I can think of to minimise disruption is to add a unique sequence of bytes at the start to help determine where a message begins -

SYNC | LENGTH | BODY | CHECKSUM

This works great until my message body unwittingly contains the SYNC sequence. Although the checksum catches any badly received message until I resync, it does mean I lose a message purely because of how I programmed my variable-length messaging system and ignored the body content.
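The false-sync hazard is easy to demonstrate. In this sketch (the SYNC value is an arbitrary illustration), a body containing the marker bytes produces a frame with a second, misleading sync in the middle, which a resynchronising receiver could latch onto:

```python
import struct

SYNC = b"\xAA\x55"  # hypothetical two-byte marker; any value can occur in binary data

def frame(body: bytes) -> bytes:
    """SYNC | LENGTH | BODY | CHECKSUM with a 1-byte modular-sum checksum."""
    return SYNC + struct.pack(">H", len(body)) + body + bytes([sum(body) & 0xFF])
```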

 

The only way to overcome the above is to break up any body bytes that match the sync values during transmission, and to take this into account during reception.
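This is usually called byte stuffing. SLIP (RFC 1055) uses exactly this escape scheme for serial framing, and Consistent Overhead Byte Stuffing (COBS) is a published algorithm with bounded overhead that solves the same problem. A minimal SLIP-style sketch, assuming a one-byte sync for simplicity and made-up escape values:

```python
SYNC, ESC = 0xAA, 0x1B            # hypothetical sync and escape byte values
ESC_SYNC, ESC_ESC = 0x01, 0x02    # what each escaped byte is replaced with

def stuff(body: bytes) -> bytes:
    """Escape sync/escape bytes so the stuffed body can never contain SYNC."""
    out = bytearray()
    for b in body:
        if b == SYNC:
            out += bytes((ESC, ESC_SYNC))
        elif b == ESC:
            out += bytes((ESC, ESC_ESC))
        else:
            out.append(b)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    """Reverse the escaping on reception."""
    out = bytearray()
    it = iter(data)
    for b in it:
        if b == ESC:
            out.append(SYNC if next(it) == ESC_SYNC else ESC)
        else:
            out.append(b)
    return bytes(out)
```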

 

But before I launch into a fiddly bit of programming, can anyone tell me if there is a ready-written protocol that deals with TCP/IP byte streams to allow variable-length message transmission?

 

I don't want to start using time as a synchronisation mechanism, as this limits my messaging rate and suddenly I need to establish common timeouts at both the transmitter and receiver. I would much rather have a protocol which logically separates variable-length messages.

 

Any ideas??

 

Message 1 of 14

It is just an inherent issue with binary (as opposed to strictly ASCII) data transport. Keep everything in a buffer. Scan for the first sync, and if the checksum fails, don't throw everything away; it might just be that something got lost. Just throw away anything up to the next sync.

 

I use a sync at the beginning and end of the message (a different sync for each). This saves me from having to calculate a checksum all the time.
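The buffer-and-scan approach described above might look like this as a Python sketch. The SYNC value and frame layout (SYNC | LENGTH | BODY | CHECKSUM) are assumptions for illustration; the key behaviour is that a bad checksum discards only the bytes up to the next sync, not the whole buffer:

```python
import struct

SYNC = b"\xAA\x55"  # hypothetical marker

class Receiver:
    """Accumulates raw TCP bytes and emits complete, checksum-valid bodies."""

    def __init__(self):
        self.buf = bytearray()

    def feed(self, data: bytes) -> list:
        self.buf += data
        messages = []
        while True:
            start = self.buf.find(SYNC)
            if start < 0:
                # No sync anywhere; keep the last byte in case it is half a sync.
                del self.buf[:max(0, len(self.buf) - 1)]
                break
            del self.buf[:start]          # align the buffer to the sync marker
            if len(self.buf) < 4:
                break                     # length field not yet complete
            (length,) = struct.unpack_from(">H", self.buf, 2)
            end = 4 + length + 1          # sync + length + body + checksum
            if len(self.buf) < end:
                break                     # body/checksum not yet complete
            body = bytes(self.buf[4:4 + length])
            if sum(body) & 0xFF == self.buf[end - 1]:
                messages.append(body)
                del self.buf[:end]
            else:
                del self.buf[:2]          # bad frame: resync past this marker
        return messages
```

Because `feed` only consumes what it can validate, it handles messages split across TCP reads and recovers alignment after junk bytes.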

Message 2 of 14

TCP already has checksums and resends corrupted IP packets.  It never loses sync.  Most likely you have a bug at either the sending or the receiving end.  You'd be better off just fixing that bug than building lots of extra overhead to recover from its effect.  TCP is reliable.

Message 3 of 14

@drjdpowell wrote:

TCP already has checksums and resends corrupted IP packets.  It never loses sync.  Most likely you have a bug at either the sending or the receiving end.  You'd be better off just fixing that bug than building lots of extra overhead to recover from its effect.  TCP is reliable.


We're talking about data that may be sent over multiple packets.  It is possible for packets to be missing (an unreliable connection, possibly?).  I don't think it's unreasonable to assume the possibility that data may be incomplete.

Message 4 of 14

IP packets can be lost; that's why TCP was invented.  It's a protocol for resending lost or corrupted IP packets, so it is redundant to do extra error checking on top of it.

Message 5 of 14

Yes, and if TCP and transmission lines were perfect, I would never see an incomplete web page.

Message 6 of 14

@Mancho00 wrote:

Yes, and if TCP and transmission lines were perfect, I would never see an incomplete web page.

Web pages use many parallel connections. Anything shown by the browser came from a complete connection. Successful connections are "perfect" (except for the rare checksum collision) and correctly reassembled, even if the packets arrived out of order or some even arrived in duplicate. TCP will request retransmission of missing packets, and if a packet never arrives, the entire transmission is discarded. Any connection that completes without error is typically perfect.

Message 7 of 14

Figures I would get an education.  Thanks.

Makes me glad I didn't bother with a checksum on my latest. 🙂

Message 8 of 14

There are two applications of checksums in communications, that I can think of, where they make sense: serial transmission and UDP.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 9 of 14

@billko wrote:

There are two applications of checksums in communications, that I can think of, where they make sense: serial transmission and UDP.


At the user level, that is correct.  As already stated, TCP has a checksum as part of each packet.  You as the user just don't have to worry about it.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 10 of 14