Robust Variable Length TCP/IP Messaging


Hi

 

You are correct: I could rely on the TCP/IP checksum, assume the TCP/IP link is reliable (which it is), and blindly assume sync is never lost. I suppose it is reasonable to assume that after a listener accepts a connection, you only get complete LENGTH/BODY messages, and until recently I would have followed that approach myself. But I have come across an application where sync was lost. It was an application bug, which I corrected, but it got me thinking that I am putting a lot of trust in third-party code to reliably convey my variable-length messages. And not just the code on my application host, but the entire internet: things start up, drop out, reset, time out, and get hacked all along the link, and some of that behaviour is outside the protocol's ability to detect (web-page content failure, for example). I don't really know how it is all implemented or what gotchas actually exist. So call me paranoid, but an extra level of application synchronisation checking when sending and receiving variable-length messages is not a bad thing in my opinion.

I also don't want to have to close and reopen connections when sync is lost, as that assumes a connection failure, which, initially at least, I can assume it is not. As a last resort that would be the ultimate RESET for the communication link, but I would like to avoid making it the first fallback. Adding sync checking would also let me jump to a UDP protocol if I want to avoid TCP/IP connection-reset pauses, as it gives me control over whether to drop a message or reset a connection.
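
To illustrate the kind of check I mean, here is a minimal sketch of length-prefixed framing with an application-level sync word, written in Python for brevity (the magic value and helper names are just illustrative, not from any existing library):

```python
# Minimal sketch: length-prefixed frames with an application-level sync
# word, so the receiver can DETECT loss of sync instead of assuming it
# never happens. The MAGIC value and helper names are illustrative.
import struct

MAGIC = 0x5AFEC0DE                # arbitrary application sync word
HEADER = struct.Struct(">II")     # big-endian: sync word, body length

def frame(body: bytes) -> bytes:
    """Prefix the body with the sync word and its length."""
    return HEADER.pack(MAGIC, len(body)) + body

def read_frame(recv_exact) -> bytes:
    """recv_exact(n) must return exactly n bytes (e.g. a loop around
    socket.recv). A wrong sync word means the stream has slipped."""
    magic, length = HEADER.unpack(recv_exact(HEADER.size))
    if magic != MAGIC:
        raise ConnectionError("lost sync: bad sync word in header")
    return recv_exact(length)
```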

 

So, from what I asked above: yes, adding a checksum is expensive, and maybe I will move to a Start-SYNC/Stop-SYNC approach. Still, a checksum under my control could be as strong (or as disabled) as I like at the application level, and if UDP were used, it would substitute for what TCP/IP provides.
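
Something like this is the application-level checksum I have in mind, again as a rough Python sketch (zlib.crc32 stands in for whatever checksum strength I end up choosing):

```python
# Sketch: append an application-level CRC-32 so a corrupted or
# out-of-sync body is caught at MY layer, whatever the transport is.
import struct
import zlib

def frame_with_crc(body: bytes) -> bytes:
    header = struct.pack(">I", len(body))
    trailer = struct.pack(">I", zlib.crc32(body))
    return header + body + trailer

def read_frame_with_crc(recv_exact) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(4))
    body = recv_exact(length)
    (crc,) = struct.unpack(">I", recv_exact(4))
    if zlib.crc32(body) != crc:
        # over UDP: drop the datagram; over TCP: trigger a resync
        raise ValueError("application checksum mismatch")
    return body
```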

 

But it would be nice to see code which does all this already, if it exists.

Any help?


 

"All through my life I've had this strange unaccountable feeling that something was going on in the world, something big, even sinister, and no one would tell me what it was."
"No," said the old man, "that's just perfectly normal paranoia. Everyone in the Universe has that."
Douglas Adams, The Hitchhiker's Guide to the Galaxy

Message 11 of 14

Your implementation of a checking routine will almost certainly lower the reliability of the TCP/IP link. The TCP/IP transfer protocol must be one of the most tested, optimised, and reliable things in the world. Modern culture is practically built on it.

 

Don't take this the wrong way, but you are NOT good enough to improve on the reliability of TCP/IP. Neither am I, and probably neither is anyone else.

 

But don't confuse the transfer layer with the protocol of the data being sent over it. While the transfer layer is reliable, the interpretation of data over that link may not be. THERE you can introduce checks. You can even do conversions on either side to guarantee that sentinel values are NOT contained in the actual message data, giving you a suitable method for looking for markers in the data stream. Look up 8b/10b encoding for an example of this. Bear in mind it's not trivial, and it also does things you won't need, but the dictionary-based translation of input data to transfer data could be interesting for you. You could generate a 3-byte dictionary for 2-byte data (that's only 65k entries). That way, you can define any of the remaining 3-byte patterns not present in the dictionary as your "Start", "Stop", "Sync", or whatever you want. This would build on top of TCP; the error correction of TCP would just let you know that the data you receive is the same as the data that was sent (physical impediments notwithstanding).
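
If 8b/10b is overkill, classic byte stuffing gives you the same sentinel guarantee with far less machinery. A rough sketch of the idea in Python (the escape values are arbitrary picks, not from any standard):

```python
# Rough byte-stuffing sketch: escape the sentinel inside the payload so
# a bare 0x00 on the wire can ONLY mean "frame boundary".
SENTINEL = b"\x00"
ESC = b"\x01"

def stuff(payload: bytes) -> bytes:
    # Escape ESC first so the two substitutions cannot collide:
    # 0x01 -> 0x01 0x03, then 0x00 -> 0x01 0x02
    return payload.replace(ESC, b"\x01\x03").replace(SENTINEL, b"\x01\x02")

def unstuff(stuffed: bytes) -> bytes:
    # Undo in the opposite order
    return stuffed.replace(b"\x01\x02", SENTINEL).replace(b"\x01\x03", ESC)

# A frame on the wire is then: stuff(message) + SENTINEL
```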

Message 12 of 14
Solution (accepted by topic author skol45uk)

Don't get me wrong, I am not trying to improve TCP/IP. I want to treat it for what it is: a byte stream, and not an infallible byte stream either. Yes, it underpins most of the digital world, but TCP/IP does let bad bytes through. Its checksums can fail. It does have weaknesses; otherwise, why bother with any other protocols for, say, deep-space telemetry or trunk data transfer? Each is suited to its specialisation. I look at TCP/IP as a "consumer" protocol: just as Windows is widely used, it is not ideal for all computing situations.

 

What I wanted was to find an implementation that gives me the ability to detect synchronisation and message failure at my application layer, irrespective of the transport layer I employ. I have now found this:

COBS

I think I can now put something together to beef up my transfer of variable-length messages.
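
For anyone else who finds this thread, here is the shape of it, a quick COBS sketch I roughed out in Python from the published algorithm (test it before trusting it):

```python
# Rough COBS sketch. COBS removes every 0x00 byte from the payload, so
# a bare 0x00 becomes an unambiguous frame delimiter for resyncing.

def cobs_encode(data: bytes) -> bytes:
    out = bytearray()
    block = bytearray()
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)   # code byte: offset to next zero
            out += block
            block.clear()
        else:
            block.append(byte)
            if len(block) == 254:        # longest run without a zero
                out.append(0xFF)
                out += block
                block.clear()
    out.append(len(block) + 1)           # final (possibly empty) group
    return bytes(out)

def cobs_decode(encoded: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(encoded):
        code = encoded[i]
        if code == 0 or i + code > len(encoded):
            raise ValueError("COBS framing error")  # sync is lost
        out += encoded[i + 1:i + code]
        i += code
        if code < 0xFF and i < len(encoded):
            out.append(0)                # the zero this group encoded
    return bytes(out)

# On the wire: send cobs_encode(message) + b"\x00", and split the
# incoming stream on 0x00 to recover frames, even after a glitch.
```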

Thanks.

 

 

Message 13 of 14

@skol45uk wrote:

Otherwise why bother with any other protocols for say, deep space telemetry or trunk data transfer. Each is suited to its specialisation. TCP/IP I look at as a "consumer" protocol.


You cannot resend data when the transmission takes A LONG time (minutes to hours for a message even to be received), which makes TCP useless in that situation. So space transmissions depend a lot on things like Reed-Solomon and other bit-correction algorithms.
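
As a toy illustration of the idea (this is just triple repetition with a per-bit majority vote, nothing like a real Reed-Solomon code, but it shows how errors are corrected without any retransmission):

```python
# Toy forward-error-correction sketch: send three copies of every byte
# and take a per-bit majority vote on receipt. Real deep-space links
# use far stronger codes; this only illustrates the principle.

def fec_encode(data: bytes) -> bytes:
    return bytes(copy for b in data for copy in (b, b, b))

def fec_decode(coded: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(coded), 3):
        a, b, c = coded[i], coded[i + 1], coded[i + 2]
        # a bit survives if at least 2 of its 3 copies agree
        out.append((a & b) | (a & c) | (b & c))
    return bytes(out)

# One corrupted copy per byte is repaired:
assert fec_decode(bytes([0x41, 0x00, 0x41])) == b"A"
```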


Message 14 of 14