
TCP Read - messages randomly truncated

Solved!
Go to solution

Hello!

We have a set of VIs that we ship with our devices. They didn't have this problem before; it appeared quite suddenly, which makes us wonder why it occurs now. While trying to reproduce it, I found that TCP Read randomly reads fewer bytes than expected. The setup:

LabVIEW 2015, Windows 10, VISA 18.5. The test VI (attached) reads Modbus register 151 (Modbus RTU over Ethernet) from IP 192.168.0.2, port 5025. See the attached video 2019-09-20 at 09-41-46_Trim.mp3. The incoming message is randomly truncated.

 

When doing the same in a different tool (Docklight Scripting), this never happens. See video 2019-09-20 at 10-04-20.mp4.

We are pretty sure this didn't happen in past years with older LabVIEW and VISA versions. Any ideas?

 

 

Message 1 of 11

You use the native TCP Read. No NI-VISA involved at all in this!

 

As to your problem, you use TCP Read in Immediate mode. Read the online help for TCP Read. The description for Immediate mode is this:

Immediate—Waits until the function receives any bytes from those you specify in bytes to read. Waits the full timeout only if the function receives no bytes. Returns the number of bytes so far. Reports a timeout error if the function receives no bytes.

 

So the Read returns IMMEDIATELY as soon as it sees ANY bytes in the incoming buffer, OR the timeout occurs.

Do you really need Immediate mode? Can you make the responding device return a termination character that you can use? If it happens to return <CR><LF>, as is common for text-based network protocols, you should simply use the corresponding mode instead; otherwise you will have to do a little extra work when receiving your strings and detect the termination marker yourself.
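
For illustration, a minimal sketch of such termination marker detection on a plain socket (Python rather than LabVIEW; the terminator and timeout values are assumptions for the example):

    # Sketch: accumulate bytes until a <CR><LF> terminator arrives,
    # roughly what TCP Read's CRLF mode does for you.
    import socket

    def read_until(sock, terminator=b"\r\n", timeout=2.0):
        sock.settimeout(timeout)       # recv() raises socket.timeout if data stalls
        buf = bytearray()
        while not buf.endswith(terminator):
            chunk = sock.recv(1)       # one byte at a time keeps the check simple
            if not chunk:              # empty result means the peer closed
                raise ConnectionError("connection closed before terminator")
            buf.extend(chunk)
        return bytes(buf)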

 

It may have seemed to work like this in the past, but simply by chance. Either you did something else in LabVIEW between sending the command and receiving the answer, so the device on the other side had enough time to send everything in the meantime; or the network is now more congested, so it takes longer for the answer to arrive in the incoming buffer; or your old software was running on a much slower machine, so it didn't get around to calling TCP Read in Immediate mode fast enough to expose the problem.

Rolf Kalbermatter
My Blog
Message 2 of 11

"No NI-VISA involved at all in this!" - I just supposed that VISA is involved in all hardware access. Sorry.

 

Immediate: it is/was the simplest option, also because the entire message is supposed to be transferred in one packet. That is what we can't understand: why would the VI suddenly stop reading bytes from a buffer that contains a full packet?

The basic problem is that TCP Read requires you to define a number of bytes to read. If it simply returned all the bytes in the packet(s), which are counted in the packet header, it would be easier. I guess that's how Docklight does it.

From what I see, Immediate mode works differently from what's described: it doesn't wait for the 512 bytes, because the device doesn't send 512.

 

The solution would be to define a shorter timeout for errors, compute the expected length, and use Standard mode. That would at least always work for expected messages, but not for error messages, which are shorter; there it would run into a timeout.
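
In socket terms, a Standard-mode read amounts to looping until the expected byte count has arrived. A minimal sketch (Python; the per-read timeout is an illustrative value):

    # Sketch: keep calling recv() until exactly n bytes are collected,
    # the raw-socket equivalent of TCP Read in Standard mode.
    import socket

    def recv_exact(sock, n, timeout=1.0):
        sock.settimeout(timeout)       # per-recv timeout, not a total deadline
        buf = bytearray()
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:              # peer closed before n bytes arrived
                raise ConnectionError("connection closed early")
            buf.extend(chunk)
        return bytes(buf)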

Message 3 of 11

@MaSta wrote:

Immediate: it is/was the simplest option, also because the entire message is supposed to be transferred in one packet. That is what we can't understand: why would the VI suddenly stop reading bytes from a buffer that contains a full packet?


That's not how TCP/IP is supposed to work. It is specifically stream-oriented, not message-oriented like UDP. There is no guarantee at all that a "packet" won't be split into multiple smaller packets by intermediate network infrastructure, or even transferred over a completely different link than the rest. There is also no guarantee that your sender isn't sending the message in two writes, and whether those two writes are combined into a single packet or sent separately depends on things like whether the Nagle algorithm is enabled on the sender side.

TCP/IP makes absolutely no guarantee that message boundaries are signaled to the receiver in any way, or even maintained in the processing of packets. The only very strong guarantee it makes is that the bytes will arrive in exactly the same order as they were sent, or not at all.

 

Accordingly, the TCP/IP socket driver is NOT required to wait until a complete message has been received before placing data into the incoming socket buffer, since there is no such thing as a complete message as far as TCP/IP is concerned. The Winsock driver may have been slower in the past about putting data into the incoming socket buffer because of additional internal buffering, but that is entirely outside of LabVIEW's control. Immediate mode does what it says: it returns as soon as it sees ANY data in the incoming buffer, as there is no TCP/IP-level end-of-packet identifier. So the cause of your problem may well be a Windows update rather than anything in LabVIEW, but the real culprit is a false assumption about how TCP/IP is meant to work.
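
The effect is easy to demonstrate outside LabVIEW. In this loopback sketch (Python; a contrived setup, not your device), two separate writes typically surface as a single read, and nothing would stop them from surfacing as several:

    # Sketch: two send() calls carry no boundary the receiver can see.
    import socket, threading, time

    server = socket.socket()
    server.bind(("127.0.0.1", 0))      # loopback, OS-assigned port
    server.listen(1)
    port = server.getsockname()[1]

    def writer():
        s = socket.create_connection(("127.0.0.1", port))
        s.sendall(b"first write ")     # two separate "messages"...
        s.sendall(b"second write")     # ...on one and the same byte stream
        s.close()

    threading.Thread(target=writer).start()
    conn, _ = server.accept()
    time.sleep(0.1)                    # let both writes reach the buffer
    print(conn.recv(4096))             # typically both writes in one read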

Rolf Kalbermatter
My Blog
Message 4 of 11

I understand. We already thought that something in the system might have changed, but tools outside of LabVIEW don't have that problem.

How does Docklight know when the message is complete, so it can put it into the log?

 

I ran a test and placed a second TCP Read. It delivered the missing bytes the first TCP Read didn't. So the solution would be to keep using Immediate mode, but calculate the expected message length, and only if the first TCP Read returns a truncated message, run the second TCP Read to fetch the rest and combine the two. This would also allow us to read Modbus error messages without running into a timeout.
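
In socket terms the idea looks like this sketch (Python, reusing the recv_exact helper from the earlier sketch; how the expected length is computed from the request is assumed, not shown):

    # Sketch: take whatever the first (immediate-style) read returns,
    # then top up to the expected length if it came back short.
    def read_expected(sock, expected, timeout=1.0):
        sock.settimeout(timeout)
        buf = bytearray(sock.recv(expected))    # may return fewer bytes
        if len(buf) < expected:                 # truncated: fetch the rest
            buf.extend(recv_exact(sock, expected - len(buf), timeout))
        return bytes(buf)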

Message 5 of 11

I have no idea how Docklight works, but I would think that it does not care about messages at all when operating on TCP/IP. It simply reads whatever is available in the buffer and puts it in its log. It's probably more akin to a terminal application that leaves the interpretation of the data stream to the viewer rather than trying to make anything out of it.

Or it works like Wireshark, which does not operate on the Windows socket library but captures the incoming data frames through its own capture driver, directly from the network driver itself, and then interprets the lowest-level byte stream to identify the various network protocol layers. That is the only way to reliably analyze, from application level, non-standard IP protocols outside of the TCP and UDP category, including lower-level protocols based on IP such as ICMP. Yes, Winsock and all the other Berkeley-based socket implementations also allow raw socket access that delivers the actual IP frames, but on all relevant OSes that has long been restricted to elevated processes only, as it can be abused to spoof, intercept, and spam network connections.

 

As to solving your issue, you should think about the actual implementation. Immediate mode may have seemed convenient, but in most cases it is not the right choice if you care about underlying protocol message boundaries at all. Ideally, the protocol in question provides a clear way to indicate such boundaries itself, either through fixed-size messages, or at least headers that indicate the exact length of the remaining variable-sized data, or, in the case of text-based protocols, through a specifically designated termination character (sequence). In the absence of both of these characteristics, you can speak of a badly designed protocol.

 

One important thing you need to start considering is that a timeout error in communication handling, be it through the LabVIEW network read nodes or VISA Read, is not generally an error but simply an indication that there is no more data to return AT THE MOMENT the relevant timeout for the connection occurred. It could be an error if your protocol requires more data to be present, or you may simply have to read another time to get the remaining data. In the absence of clear protocol attributes like an expected message size or a termination character (sequence), you have to start thinking smart to still solve it somehow. Yes, the protocol is crap in such a situation, but real life sometimes requires handling crap. One possibility is to read with a short timeout value until no more data is returned and then throw away the timeout indication as an expected result. It's not foolproof, since the underlying internet connection may have been interrupted halfway through the message and the network infrastructure may have decided to transfer the remaining data over a slow satellite link that happens to go around the world twice and therefore arrives only several seconds later, but it is fairly reliable.
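
As a sketch, that possibility could look like this (Python; the 100 ms idle timeout is an illustrative value):

    # Sketch: read with a short timeout until nothing more arrives, then
    # treat the timeout as the expected end-of-message indication.
    import socket

    def read_all_available(sock, idle_timeout=0.1, chunk=4096):
        sock.settimeout(idle_timeout)
        buf = bytearray()
        while True:
            try:
                data = sock.recv(chunk)
            except socket.timeout:     # quiet period: assume message complete
                break
            if not data:               # peer closed the connection
                break
            buf.extend(data)
        return bytes(buf)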

Rolf Kalbermatter
My Blog
Message 6 of 11

@MaSta wrote:

I understand. We already thought that something in the system might have changed, but tools outside of LabVIEW don't have that problem.

How does Docklight know when the message is complete, so it can put it into the log?

 


You've got to remember that LabVIEW is a programming language; as such, it will do what you tell it to do. Docklight knows when the message is complete because its programmers built the smarts into the program to recognize when a message is complete.

 


@MaSta wrote:

 

I ran a test and placed a second TCP Read. It delivered the missing bytes the first TCP Read didn't. So the solution would be to keep using Immediate mode, but calculate the expected message length, and only if the first TCP Read returns a truncated message, run the second TCP Read to fetch the rest and combine the two. This would also allow us to read Modbus error messages without running into a timeout.


Why do you need the expected length? There are two possible ways to read all of the data without knowing the expected length, depending on the characteristics of the data.

 

1. Read until a termination character is obtained. This obviously only works if the device returns messages with a termination character (many do). 

 

2. Read and append until no more data is available (you can then ignore the timeout error). I would use a short timeout here so that you're not waiting too long to process the data.

 

 

Message 7 of 11

Thanks.

1. The Modbus protocol has no termination character, and it's not normal to add one just for Ethernet. The problem doesn't exist when using USB.

2. This is what I actually did now: if the first TCP Read hasn't returned all bytes, one more read is done, and in the end I have all the bytes. But if an error message comes (always 5 bytes), which I can't clearly identify as such because the 5 bytes could also be a portion of an expected message, I would also repeat the TCP Read once and run into a timeout. That would slow down communication considerably, but it seems we have to live with it.

Message 8 of 11

@MaSta wrote:

 

2. This is what I actually did now: if the first TCP Read hasn't returned all bytes, one more read is done, and in the end I have all the bytes. But if an error message comes (always 5 bytes), which I can't clearly identify as such because the 5 bytes could also be a portion of an expected message, I would also repeat the TCP Read once and run into a timeout. That would slow down communication considerably, but it seems we have to live with it.


That sounds like you do know, or can determine, how many bytes should arrive. Why not abandon the Immediate read then and simply request the expected number of bytes? As an alternate approach, since the expected message is always at least 5 bytes, always read 5 bytes first, determine whether it is an error message or part of your expected answer, and in the second case read the remaining bytes that you need. You say there is no way to distinguish whether the received data is an error message or part of a real message? I find that hard to believe; such a protocol would be impossible to receive reliably.
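
A sketch of that alternate approach (Python, reusing recv_exact from the earlier sketch; the frame layout follows the Modbus RTU convention that an exception response sets the high bit of the function code):

    # Sketch: read the 5-byte minimum frame first, then decide.
    # Exception frame: addr, func|0x80, exception code, CRC16 = 5 bytes.
    # Normal read response: addr, func, byte count, data, CRC16.
    def read_rtu_response(sock):
        head = recv_exact(sock, 5)     # shortest possible frame
        if head[1] & 0x80:             # high bit set: exception response
            return head                # the 5-byte error frame is complete
        nbytes = head[2]               # data byte count (>= 2 for register reads)
        # full frame = 3 header + nbytes data + 2 CRC bytes; 5 are in hand
        return head + recv_exact(sock, nbytes)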

Rolf Kalbermatter
My Blog
Message 9 of 11
Solution
Accepted by rolfk

You are correct, Modbus does not have a termination character. However, the protocol is well defined, and you can determine the number of bytes you need to read. My suggestion would be to replace your code with the NI Modbus library to handle the communication with the device. It implements the complete protocol and knows how to read the messages. You can get the library from the Tools Network.
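
For example, for a Read Holding Registers request (function 0x03) the expected RTU response length follows directly from the register count (a sketch based on the standard frame layout):

    # Sketch: expected RTU response length for function 0x03.
    def expected_read_response_len(num_registers):
        # addr(1) + func(1) + byte count(1) + 2 bytes per register + CRC16(2)
        return 3 + 2 * num_registers + 2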

 

If you have to use your own code, change your read method. When I need to read messages of unknown length, I read a single byte using a longer timeout; this covers the case when no data is sent at all. Once I have read that single byte, I continue reading chunks of data with a very short timeout, until I actually get a timeout. That generally means I have read the complete message. There is no guarantee, but it is far better than using an Immediate read. You need to determine what makes sense for the short timeout: you don't want it so long that it causes delays in your application, yet it shouldn't be so short that you stop reading too soon. I find that something between 50 and 100 ms usually works.
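
Put together, a sketch of that recipe (Python, assuming the read_all_available helper from the earlier post; the timeout values are illustrative):

    # Sketch: one byte with a long timeout covers the no-reply case,
    # then short-timeout chunks until a quiet period ends the message.
    import socket

    def read_message(sock, first_timeout=2.0, idle_timeout=0.075):
        sock.settimeout(first_timeout)
        first = sock.recv(1)           # socket.timeout here means no reply at all
        if not first:                  # empty result: connection closed
            raise ConnectionError("connection closed")
        return first + read_all_available(sock, idle_timeout)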



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 10 of 11