LabVIEW

RT TCP Write sometimes does not send

I am using a cRIO-9066 to send information via TCP to a Windows application written in C#. I use the simple TCP Write to do this. I typically send about 13,000 to 15,000 bytes in a single call (which results in about ten data packets per call). Randomly, about every tenth call, TCP Write executes normally with no errors and no timeout, but nothing is actually sent. I have added code to check the number of bytes sent and retry if the count is short, but it made no difference: every once in a while, nothing gets sent. I have verified this by using Wireshark to monitor the TCP traffic on the Windows side. It appears to be timing related. I normally send the data in pairs, with the two messages of a pair separated by about 0.1 s and each pair separated from the next by about 1 s; the error almost always occurs on the second message of a pair. The Windows application fully receives the data from one request before issuing another.

 

At this point, I have a couple of options:

  1. Make the packet size smaller. The code is already in place to do this on both sides of the communications stack, but it will make my communications less efficient.
  2. Add a minimum time delay between the send of two packets. This would also make the communications less efficient.

Development environment is a full 2017 SP1 golden stack for LabVIEW, LabVIEW RT, LabVIEW FPGA and cRIO.

 

Has anyone seen anything like this? If so, did you solve it? How?

Message 1 of 8

Rather than use simple TCP for my LabVIEW RT project, I used Network Streams. My data rates are comparable to yours -- I'm sending 16-24 channels of I16 data at 1 kHz, 50 samples at a time (i.e., 20 sends per second). One of the channels being sent is a "clock" channel to ensure I don't lose any packets, and they all come through fine.

 

Bob Schor

Message 2 of 8

@Bob_Schor wrote:

Rather than use simple TCP for my LabVIEW RT Project, I used Network Streams.


Those are not compatible with a C# application.


Message 3 of 8

I tried both of my options. Increasing the delay from 0.1 s to 0.2 s helped, but did not eliminate the problem; going to 0.3 s did not help any further. Dropping the buffer size to below the maximum TCP segment size (about 1,460 bytes) seems to have solved most of the problem. I will update this thread once I am satisfied the problem is truly worked around. Solving it properly will require some NI participation.
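The chunking workaround can be sketched as follows (a hypothetical C helper, not the original LabVIEW code: it only computes the per-write sizes, with each chunk then handed to its own TCP Write or `send()` call):

```c
#include <stddef.h>

#define MAX_SEGMENT 1460  /* stay at or below a typical Ethernet TCP MSS */

/* Split a message of `len` bytes into per-write sizes so each write
 * fits in a single TCP segment. Fills `sizes` (up to `max_chunks`
 * entries) and returns the number of chunks produced. */
size_t split_into_chunks(size_t len, size_t sizes[], size_t max_chunks)
{
    size_t count = 0;
    while (len > 0 && count < max_chunks) {
        size_t n = (len > MAX_SEGMENT) ? MAX_SEGMENT : len;
        sizes[count++] = n;
        len -= n;
    }
    return count;
}
```

For a 14,000-byte message this yields nine full 1,460-byte chunks plus one 860-byte remainder, i.e. ten writes instead of one.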

Message 4 of 8

Check whether the Nagle algorithm is having an effect. When active, it delays sending small packets in the hope that more data will arrive soon to be coalesced into a larger one.

 

I believe there was a way to shut it off on RT.

 

Adjusting your packet sizes to be exactly the size the network will transport may also help avoid that problem.

 

Ben

Message 5 of 8

Ben, I thought about the Nagle algorithm and did not think it would be an issue, but on further reflection it easily could be, and it would explain the inconsistent behavior. If my last packet is small enough, it may never be sent, which would cause the entire message never to arrive. Unfortunately, the ZIP file in the above link appears to be corrupt. I tried several unzip utilities and then opened it in a text editor; the file appears to actually be a GIF. If you have a usable copy of the code, I would appreciate trying it.

Message 6 of 8

Further research shows that the Nagle algorithm and/or delayed ACK are probably causing my issues. I say probably because I still have not managed to figure out how to enable TCP_NODELAY (disable the Nagle algorithm) on one of the new Zynq boards (I use the cRIO-9066 and sbRIO-9627). I know what C subroutine to use (setsockopt(...)) and how to get the socket ID (TCP Get Raw Net Object.vi in <vi.lib>\utility\tcp.llb). However, I don't know which shared object (.so) setsockopt() resides in, so I can't call it. Anyone know?
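For reference, the C call in question looks like this. On desktop Linux, setsockopt() is exported by the standard C library (libc.so.6); whether the same holds on NI Linux RT is an assumption I have not verified on the target:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on an already-open socket descriptor
 * (e.g. one obtained via TCP Get Raw Net Object.vi).
 * Returns 0 on success, -1 on failure with errno set. */
int disable_nagle(int sockfd)
{
    int flag = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag));
}
```

With TCP_NODELAY set, each write is pushed to the wire immediately instead of being held back waiting for more data or an ACK.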

 

Note that you can also set a global TCP_LOW_LATENCY by echoing a 1 to the appropriate proc file. I tried this using the command-line VI and got a security violation, and I could not get sudo to work with it (sudo -u admin -s echo 1 > /proc/sys/net/ipv4/tcp_low_latency, with the admin password supplied on stdin).
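One likely culprit with that exact command line: the shell performs the `>` redirection *before* sudo elevates privileges, so the write to /proc is attempted as the unprivileged user and is denied. A common workaround (a sketch, untested on NI Linux RT) is to let an elevated `tee` perform the write:

```shell
# `sudo echo 1 > file` fails because the redirection is done by the
# calling shell, not the elevated process. Piping into an elevated
# `tee` makes the privileged process do the write instead:
echo 1 | sudo tee /proc/sys/net/ipv4/tcp_low_latency
```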

Message 7 of 8

In the final analysis, it was a combination of Nagle's algorithm and another handshaking bug (my fault) that was causing my issues; there was no LabVIEW bug. However, if you are trying to use TCP to stream large data sets, you probably want to turn off Nagle's algorithm. You can find code to do so here (the code link now works).

Message 8 of 8