
LabVIEW


TCP mess

Solved!
Go to solution

@jg69 wrote:

As far as I know, that's a standard feature of TCP/IP (and not directly of LabVIEW). The underlying TCP/IP driver will try to minimize the traffic and maximize the data content within one TCP/IP packet. If you want to ensure that even small messages are sent immediately, then it might be a good idea to turn on the Nagle algorithm for that connection. If you google for Nagle and LabVIEW you'll find how to do that.

 

Regards, Jens


I guess you meant to turn it off. The Nagle algorithm is what seems to merge packets. http://digital.ni.com/public.nsf/allkb/7EFCA5D83B59DFDC86256D60007F5839

I tried the suggested solution and it worked perfectly. No more errors with 50 ms timing.
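For readers outside LabVIEW: as far as I know, the VI in that KB article comes down to setting the standard TCP_NODELAY socket option on the open connection. A minimal Python sketch of the same idea, with a hypothetical instrument address:

```python
import socket

# Hypothetical instrument address; 5025 is the conventional
# raw-socket SCPI port.
sock = socket.create_connection(("192.168.1.50", 5025))

# Disable the Nagle algorithm so each write goes on the wire
# immediately instead of being buffered and merged with later writes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.sendall(b"*IDN?\n")
print(sock.recv(4096))
sock.close()
```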

0 Kudos
Message 11 of 21
(1,420 Views)

Disabling the Nagle algorithm actually addresses a different problem than the one you are seeing. Basically, you are using a suboptimal communication protocol over TCP/IP.

 

Aside from some very obscure protocols you have two basic types:

 

1) Character-string-based protocols: Not defining a termination character (or sequence) is definitely asking for trouble here. All the popular text-based network protocols, such as FTP, HTTP, and SMTP, define a termination character that indicates the end of a line/command/message. HTTP, for instance, uses CR/LF to separate header lines, and at least in HTTP 1.1 a "Content-Length:" header specifying the number of bytes in the body that follows the header is required. Separation of the header from the body is indicated by two consecutive CR/LF terminators, with no other character in between. (A sketch of a reader for this kind of protocol follows after this list.)

 

2) Binary protocols: Here, unless you have a fixed-size protocol, which is rare, you do need some kind of common header that specifies the length of the data section to follow. (See the second sketch below.)
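For illustration, a minimal Python sketch of the first type: a receive loop that buffers stream data until a terminator arrives. The names and the CR/LF choice are assumptions, not anything LabVIEW-specific.

```python
import socket

def read_line(sock: socket.socket, terminator: bytes = b"\r\n") -> bytes:
    """Buffer bytes from the stream until the terminator appears.

    TCP has no message boundaries, so recv() may return a fragment of
    a message or several messages at once; only the terminator tells
    us where one message ends.
    """
    buf = b""
    while terminator not in buf:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    line, _, rest = buf.partition(terminator)
    # 'rest' may already contain the start of the next message; a real
    # reader would keep it in a persistent buffer rather than drop it.
    return line
```

And the second type, sketched with a hypothetical 4-byte big-endian length prefix in front of every message:

```python
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # Length header first, then the data it describes.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    # TCP may deliver the requested bytes in several pieces.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```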

 

The real problem with the Nagle algorithm is that if you have a protocol built on small command-response sequences, it limits how fast you can go back and forth: the Nagle algorithm buffers small outgoing messages for up to 200 ms, or until the combined messages amount to a few hundred bytes, before an actual network packet is sent out. This is because each network packet carries a minimum overhead for the various protocol headers, and sending a packet for every few bytes that you happen to write separately would needlessly congest the wire.

 

If you add a termination character to the message and configure the instrument to actually terminate its command-receive sequence on that character (almost all devices do that), you don't get garbled messages on the receiver side. What happens now is that the device receives whatever is on the wire, and after a certain timeout with no further data, the buffer accumulated so far is sent to the command interpreter to be executed anyway. If the device saw a proper termination character in the data stream, it would send the command to the command interpreter immediately!

Rolf Kalbermatter
My Blog
Message 12 of 21
(1,407 Views)

TCP is a stream on which you can encode messages, but the IP packets that TCP chops the stream into are not the same as the messages you encode. So one needs a method to define the message boundaries.
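A quick loopback sketch in Python makes this concrete: two separate writes by the sender can arrive as a single read at the receiver, so the write boundaries are invisible on the wire.

```python
import socket

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
peer, _ = server.accept()

# Two separate writes...
client.sendall(b"MEAS:VOLT?")
client.sendall(b"MEAS:CURR?")

# ...will very likely arrive as one read (timing dependent):
# b'MEAS:VOLT?MEAS:CURR?' - one blob with no visible boundary.
print(peer.recv(4096))

for s in (client, peer, server):
    s.close()
```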

0 Kudos
Message 13 of 21
(1,396 Views)

Right, but Ethernet and TCP are common today for remote device control with SCPI, and SCPI commands are usually short so that communication is fast. From my understanding, TCP chops big packets into smaller ones; that's OK as long as they safely arrive at their destination. But for small packets it's best to do nothing, or to configure the TCP connection not to merge packets. This is done with the VI from the KB article, but that is not the usual way. Actually, I'd expect TCP to automatically find the best method. Using a termination character would help, yes.

 

But when our customers write applications to control many different devices from different manufacturers, and they generally don't use termination characters, LabVIEW can be a problem. However, this is not my concern; they can append one if required. In my test I didn't think of that option.

 

Thanks to all of you!

0 Kudos
Message 14 of 21
(1,371 Views)

I'm not sure how you come to the conclusion that LabVIEW can be the problem here. As far as TCP/IP is concerned, LabVIEW simply uses the platform socket interface, just like every other application out there (unless the programmer decides to do raw packet handling through some other means; you can't prevent some people from reinventing the wheel over and over again).

Unlike the raw socket interface, LabVIEW does in fact have extra features: the TCP Read function supports four different modes, all of which you would have to implement yourself in most other environments.

And what is so hard about explicitly appending a CR, LF, or CR/LF to every command string you send to a device? That is mostly what we were talking about. If you do that, and your device is anything like a halfway professionally designed piece of hardware, it will understand that this character indicates the end of a command and execute it, no matter how many more bytes may already be waiting in the incoming queue of that TCP/IP stream.
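Such a helper is a one-liner in most languages; a Python sketch (which terminator to use depends on the device):

```python
def send_command(sock, command: str, terminator: str = "\n") -> None:
    # Append the terminator if the caller forgot it, so the device can
    # execute each command as soon as the terminator arrives, no matter
    # how the stream was split or merged into packets along the way.
    if not command.endswith(terminator):
        command += terminator
    sock.sendall(command.encode("ascii"))
```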

You seem to think that TCP/IP works like pipes, which can (but don't have to) be set to work in message mode, where each read receives a full message as sent by the sender. That is wrong. TCP/IP was designed and implemented as a stream-oriented transport, and there are no inherent message boundaries in the stream as far as TCP/IP is concerned. It is the responsibility of the protocol on top of TCP/IP to provide them if needed, and just about every protocol on top of TCP/IP that is not simply an interactive shell command interface does exactly that. For shell command interfaces, the operator at the console is the interpreter of the "messages": he or she concludes from the formatting on the screen when input is expected and what the response from the remote device is.

Or maybe you have UDP communication in mind, which does take a more message-oriented approach. When reading, you either receive an entire message as sent by the sender (or a portion of it, if you requested fewer bytes than the message contained) or nothing at all. If you didn't happen to read the entire message, the rest of it is lost forever; there is no way to read it later.
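That UDP behaviour can be demonstrated in a few lines of Python (on Linux the short read silently discards the rest of the datagram; Windows raises an error instead):

```python
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"0123456789", rx.getsockname())

# Ask for only 4 of the 10 bytes: the datagram boundary is preserved,
# but the remaining 6 bytes are discarded, not queued for a later read.
print(rx.recvfrom(4))   # (b'0123', (address, port))

rx.close()
tx.close()
```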

 

Both TCP/IP and UDP are designed like that, and LabVIEW does nothing to change it.

Rolf Kalbermatter
My Blog
Message 15 of 21
(1,365 Views)

Also, if you care to read the SCPI spec, or simply the Wikipedia article about it, you will see that the SCPI specification actually mentions that multiple commands can be sent in one go IF and only if they are separated by a semicolon. However, in "Volume 1, Appendix A Programming Tips" of the full SCPI specification there is a caveat that mixing program commands and query commands in this way may NOT result in the desired behaviour, since an instrument may choose to defer execution of program commands "until a program message terminator is received", so the query result could be the one from before the commands were executed.

SCPI itself doesn't specify the exact program message terminator to be used, since that depends on the underlying transport. For GPIB there is an explicit EOI (End or Identify) hardware handshake line that IEEE 488.2 requires to be used to indicate that the end of a message has occurred, but for character-stream-based transports such as serial communication or TCP/IP, the use of <LF> is mandated by the IEEE 488.2 standard that SCPI relies upon.
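Putting the two rules together, a SCPI exchange over a raw TCP socket might look like this Python sketch (the address, and 5025 as the conventional SCPI raw-socket port, are assumptions):

```python
import socket

sock = socket.create_connection(("192.168.1.50", 5025))

# Two program commands in one program message, separated by a
# semicolon and terminated by the <LF> that IEEE 488.2 mandates:
sock.sendall(b"*RST;*CLS\n")

# Keep the query in its own program message so its result cannot be
# deferred behind the commands above:
sock.sendall(b"MEAS:VOLT:DC?\n")
print(sock.recv(4096))
sock.close()
```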

Rolf Kalbermatter
My Blog
0 Kudos
Message 16 of 21
(1,355 Views)

Thanks, we are aware of command concatenation, but that wasn't the problem. It was rather that the commands were sent one by one with the TCP VIs, but bundled into one packet in (as it seems) arbitrary order, and of course with the separator token. This isn't expected, and when done the same way outside of LabVIEW, for example in C#, it doesn't occur.

0 Kudos
Message 17 of 21
(1,295 Views)

@rolfk wrote:

I'm not sure how you come to the conclusion that LabVIEW can be the problem here. As far as TCP/IP is concerned, LabVIEW simply uses the platform socket interface, just like every other application out there.


That's the point. It only happens when using LabVIEW. In C# there is no such problem.

 


@rolfk wrote:

 

And what is so hard about explicitly appending a CR, LF, or CR/LF to every command string you send to a device? That is mostly what we were talking about. If you do that, and your device is anything like a halfway professionally designed piece of hardware, it will understand that this character indicates the end of a command and execute it, no matter how many more bytes may already be waiting in the incoming queue of that TCP/IP stream.


That, however, is not the point. The firmware (not the hardware) and its parser would have to strip the end token. If that isn't planned for, because there is no reason to use a termination character over Ethernet or USB, then it is wasted time to search the command string for these characters. The firmware in an ARM controller does many things, and communication is not the priority.

 

Well, we think we solved the problem by deactivating the Nagle algorithm for certain applications.

0 Kudos
Message 18 of 21
(1,292 Views)

@MaSta wrote:

 Well, we think we solved the problem by deactivating the Nagle algorithm for certain applications.


I wouldn't call that a solution. It is a workaround for a broken protocol, and possibly even a broken application.

The claim that the packets you sent out were put into arbitrary order definitely doesn't hold up at all. That has never happened to me, neither in LabVIEW nor in any other application, unless I ran multiple threads trying to send packets over the same connection.

And that might be your problem here. LabVIEW is very happy to execute code in parallel if you don't enforce some serialization through data-flow dependency. In C#, and almost every other text-based language, you have to go out of your way to get multiple threads doing things in parallel: unless you explicitly create threads, everything happens serially, and then you won't see any such effects.
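For illustration, the same hazard reproduced with explicit threads in Python: two unsynchronized writers on one connection may emit their commands in either order.

```python
import socket
import threading

def worker(sock: socket.socket, command: bytes) -> None:
    # No lock, no ordering: both threads race to write first.
    sock.sendall(command + b"\n")

sock = socket.create_connection(("192.168.1.50", 5025))  # hypothetical device
threads = [
    threading.Thread(target=worker, args=(sock, b"VOLT 5.0")),
    threading.Thread(target=worker, args=(sock, b"OUTP ON")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The device may see the two commands in either order; a lock around
# the send (or a single writer) is the textual equivalent of chaining
# LabVIEW subVIs through the error cluster and connection refnum.
```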

That means that if things need to be executed in a certain order in LabVIEW, you need to enforce that order. The best way to do so is to create proper subVIs and pass the error cluster and connection refnum in and out of each of them; that way you can easily chain them together. If at some point you find that you want LabVIEW to execute two of the subVIs in parallel, you still can, by placing them beside each other instead of in a chain.

 

That the Nagle algorithm messes up your device communication definitely means that the protocol is fundamentally broken. On local subnets, where you have everything under control, this doesn't really matter, but it will break down as soon as you try to connect things over the wider internet. Routers in between may decide to repackage the data frames in many different ways. The only thing the TCP/IP protocol guarantees is that any data arrives in order or not at all. You cannot assume that the packet length will be the same on the other end of the globe, as the protocol is allowed to pack packets into bigger frames or to fragment them.

 

As for the extra effort of having to detect a specific termination character, I don't buy that argument. You already have a command-line parser that does the whole command parsing anyway. Adding a case so that, when it sees the termination character, it sends the accumulated command to the execution system, even if the incoming byte stream hasn't reached its end, would cost nothing in performance.
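Such a parser case might look like this minimal Python sketch; the names are hypothetical, and the point is dispatching on the terminator instead of waiting for the receive buffer to drain:

```python
class CommandParser:
    """Incremental receive-side parser: execute every complete,
    terminator-delimited command as soon as its terminator is seen,
    even if more bytes are already buffered."""

    def __init__(self, terminator: bytes = b"\n"):
        self.terminator = terminator
        self.buffer = b""

    def feed(self, data: bytes) -> None:
        self.buffer += data
        # Dispatch each complete command; keep any trailing fragment
        # until its terminator arrives in a later chunk.
        while self.terminator in self.buffer:
            command, _, self.buffer = self.buffer.partition(self.terminator)
            self.execute(command)

    def execute(self, command: bytes) -> None:
        print(b"executing: " + command)   # stand-in for the real interpreter

parser = CommandParser()
parser.feed(b"VOLT 5.0\nCURR")   # executes "VOLT 5.0", buffers the fragment
parser.feed(b"?\n")              # completes and executes "CURR?"
```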

Rolf Kalbermatter
My Blog
Message 19 of 21
(1,284 Views)

A bit late, but I'll comment:

I don't know what a "broken" protocol is supposed to be, but I use LabVIEW, and LabVIEW uses Windows drivers to access ports. If the driver messes up the packet by bundling multiple messages sent one after another to the open socket, then I'd say this is not a LabVIEW problem and not a broken protocol; it's something you simply need to be aware of. But no, nothing in parallel. Also no big network, no router, just a plain direct connection. Just different SCPI commands sent at short intervals (5 ms). Nagle bundles them in random order and number, and without a character terminating the individual command strings, the parser would fail to sort this out.

 

0 Kudos
Message 20 of 21
(1,034 Views)