LabVIEW


TCP/IP protocol on subnet

Solved!
Go to solution

Hi,

 

(This question is not LabVIEW related per se, but since I'll implement it in LabVIEW, I figured this could be a good forum to get some extra info.)

 

I need to do some client/server communication over TCP/IP and I'm wondering about the protocol I should use. My first thought was that once I have my message built, I take its length (as a 4-byte integer) and prepend it to the message. The server then waits for the first 4 bytes and, knowing the message length, reads exactly the right number of bytes from the port. This looks very simple to implement, and I have seen it implemented with some sensors.
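To make the idea concrete, here is a minimal sketch of that length-prefix framing in Python (the thread is about LabVIEW, but the framing logic is language-agnostic; `frame_message` is a hypothetical name):

```python
import struct

def frame_message(payload: bytes) -> bytes:
    """Prepend the payload length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

framed = frame_message(b"hello")
# The first 4 bytes decode back to 5, the payload length.
```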

 

A colleague of mine disagrees and suggests that my packet should have a fixed header string, a set of termination bytes, and maybe even a CRC code inside, just to be sure the server/client can detect a corrupted packet. His example: what if the first 4 bytes indicate a 100-byte message, but only 99 bytes arrive? How do you resynchronize then?

 

I'm not an expert in TCP, but I assume it has all the built-in checksums, error correction, error detection, etc., so I don't see what a more complicated message structure would buy me in exchange for the more complicated implementation. I could be horribly wrong, though, and maybe it's super obvious that my simple solution will fail within 5 minutes in real life.

 

Let me know your thoughts. Thanks!

 

(The server and the client(s) will be in the same company network, so packets don't need to travel over the internet. They won't necessarily be on the same subnet, though. IPs can be considered static.)

Message 1 of 6

I disagree with the part about setting termination bytes. What kind of data will you be sending? Suppose it is a file, or some pure binary stream where any byte value can appear in valid data. If you pick a termination byte, or even a series of bytes, it is always possible that byte or combination will appear in the valid data and cause you to terminate the message before it is complete.

 

A header isn't a bad idea. It would let you know you've got the start of a message, or carry information about the type of message. (Suppose you were sending different commands. The header could tell the receiver, via a code, what type of message is coming.) Use the 4 bytes as a data message length. And a checksum appended to the end is a good safety measure to help ensure nothing was missed or corrupted along the way. But the TCP/IP protocol has features built in to help prevent that from happening.
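As a rough illustration of that packet layout (header marker, type code, length, payload, checksum), here is a sketch in Python; the marker bytes, field order, and function names are assumptions for illustration, not an established format:

```python
import struct
import zlib

HEADER = b"\xaa\x55"  # hypothetical start-of-message marker

def build_packet(msg_type: int, payload: bytes) -> bytes:
    # Layout: header | type (1 byte) | length (4 bytes) | payload | CRC32 (4 bytes)
    body = HEADER + struct.pack(">BI", msg_type, len(payload)) + payload
    return body + struct.pack(">I", zlib.crc32(body))

def verify_packet(packet: bytes) -> bool:
    # Recompute the CRC over everything except the trailing 4 CRC bytes.
    body, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    return zlib.crc32(body) == crc
```

The header lets a receiver scan forward to the next `\xaa\x55` if it ever loses sync, and the CRC catches corruption that slipped past lower layers.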

Message 2 of 6

TCP/IP by definition guarantees that messages arrive complete, intact, and in order. That is what the TCP part of TCP/IP is all about.

 

So prepending the message length to the front of the packet and using a "read the length, then acquire the rest" approach is all you need.
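One subtlety on the receive side of that pattern: a raw TCP read may return fewer bytes than requested, so you loop until everything has arrived. A minimal Python sketch, with `recv_exact` and `read_message` as hypothetical names:

```python
import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Loop until exactly n bytes arrive; recv() may return fewer per call."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before the full message arrived")
        buf.extend(chunk)
    return bytes(buf)

def read_message(sock: socket.socket) -> bytes:
    """Read the 4-byte big-endian length prefix, then the payload."""
    length = int.from_bytes(recv_exact(sock, 4), "big")
    return recv_exact(sock, length)
```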

 

But you can move beyond that: following the idea of the OSI 7-layer model, you are free to build on that foundation and implement your own protocols and connections as needed.

 

Your associate needs to understand that there is a checksum built into each TCP/IP packet. If the checksum does not pass, the packet is tossed and retransmitted.

 

No need to overly complicate things.

 

Ben

Message 3 of 6

This would be a mega-simple "protocol": 1 byte that identifies the command (there will be no more than 20-30 commands), followed by some bytes representing the data belonging to the command (the parameters).

 

Thanks for sharing your thoughts.

Message 4 of 6
Solution
Accepted by topic author 1984

"

This would be a mega-simple "protocol": 1 byte that identifies the command (there will be no more than 20-30 commands), followed by some bytes representing the data belonging to the command (the parameters).

"

If you make that "one byte" an enum, you can use it to select a case in which you can cast the "parameters" to an associated LV cluster appropriate for the command type.

 

I often cast my messages as a cluster that has an enum defining the message type and a variant to carry the data. The cluster gets flattened to a string, and then the length of the string is prepended as you described. Unraveling the packet on the receiving end is dirt simple.
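A text-language analogue of that enum-plus-payload pattern can be sketched in Python, with a hypothetical command set and JSON standing in for LabVIEW's flatten-to-string of a cluster/variant:

```python
import enum
import json
import struct

class Command(enum.IntEnum):  # hypothetical command set for illustration
    START = 1
    STOP = 2
    SET_PARAM = 3

def pack_message(cmd: Command, params: dict) -> bytes:
    # Payload: 1 command byte + serialized parameters; prefix with its length.
    payload = bytes([cmd]) + json.dumps(params).encode()
    return struct.pack(">I", len(payload)) + payload

def unpack_message(data: bytes):
    # Read the length prefix, then split the payload into command and parameters.
    length = struct.unpack(">I", data[:4])[0]
    payload = data[4:4 + length]
    return Command(payload[0]), json.loads(payload[1:])
```

On the receiving end, the command value selects a case (just as the enum selects a case structure in LabVIEW), and that case knows how to interpret the parameters.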

 

Enough for now,

 

Ben  

Message 5 of 6

It was very useful. Thanks!

Message 6 of 6