Missing UDP packets

I'm trying to use LabVIEW 7.1 on WinXP to control an in-house sensor I've developed, and I'm using UDP for data transfer and communications.  I periodically call for a sample of data from the sensor, and it sends back about 130 UDP packets that are about 300 bytes in size.  These packets come in very quick succession (10-20 microseconds apart, I believe), and I'm having trouble picking up all of the packets with LV. 

Using a packet sniffer, I've confirmed that all of the packets are indeed reaching the computer intact, but for some reason LV doesn't get them all.  I was under the impression that these packets would reach the network card and sit in a buffer until I could pull them off with LV.  In my software, LV checks for UDP packets about every millisecond (though there are intermittent delays of up to 15 ms when LV goes off to perform other functions).  This still doesn't seem to be fast enough to ensure that I get all of the packets before they are removed from the buffer.  I can only think of a few things that may be happening:

1.  The packets reach the network card and decay (for lack of a better word) in just a few ms before I can read them with LV.
2.  The network card has a small buffer (less than 100 packets), and this often fills before LV can pull packets off to free up space. 
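
As a rough illustration, here is what I imagine the second scenario looks like at the socket level, sketched in Python rather than LabVIEW (the port number is made up): the OS holds unread datagrams in a fixed-size per-socket receive buffer, and once that buffer is full, new datagrams are silently dropped even though the sniffer sees them reach the card.

import socket

SENSOR_PORT = 61557   # hypothetical port; substitute whatever the sensor uses

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", SENSOR_PORT))

# The default receive buffer is often only a few tens of kilobytes; 130 packets
# of ~300 bytes each (plus per-datagram overhead) can exceed it if the reader
# falls 10-15 ms behind the sender.
print("socket receive buffer (bytes):",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

while True:
    data, addr = sock.recvfrom(2048)   # datagrams not read in time are simply gone
    print(len(data), "bytes from", addr)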

I wouldn't have thought either of these to be the case, but I don't know much about networking, and I'm no pro at LV either.  Any help is much appreciated--especially any lower-level insight into how LV and UDP networking actually work.

Regards,
Tom
Message 1 of 6
The most likely scenario is that you are losing packets of data.  I quote from the following article, which you may like to read:
http://zone.ni.com/devzone%5Cconceptd.nsf/webmain/BA7F1D7CE009BE7686256A5B004F335D

"UDP provides simple, low-level communication among processes on computers. Processes communicate by sending datagrams to a destination computer or port. A port is the location where you send data. IP handles the computer-to-computer delivery. After the datagram reaches the destination computer, UDP moves the datagram to its destination port. If the destination port is not open, UDP discards the datagram. UDP shares the same delivery problems of IP.

Use UDP in applications in which reliability is not critical. For example, an application might transmit informative data to a destination frequently enough that a few lost segments of data are not problematic."


Switch to TCP if you need to be sure of receiving the data.  Again, to quote:
"TCP ensures reliable transmission across networks, delivering data in sequence without errors, loss, or duplication. TCP retransmits the datagram until it receives an acknowledgment."

"
Deciding between TCP and UDP
TCP is the best protocol to use if you want reliable data transmission. UDP is a connectionless protocol with higher performance, but it does not ensure reliable data transmission.
"

There are examples available:
"TCP and UDP Examples
Refer to the labview\examples\comm\TCP.llb and the labview\examples\comm\UDP.llb for examples of using the TCP and UDP VIs and functions."
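
To make the contrast concrete, here is a minimal sketch in Python rather than LabVIEW (not one of the .llb examples; the address and port are made up): with TCP the receiver reads a connected byte stream, and the stack retransmits anything that gets lost, so data does not silently disappear the way a UDP datagram can.

import socket

HOST, PORT = "192.168.0.10", 5000   # hypothetical sensor address and port

with socket.create_connection((HOST, PORT), timeout=5.0) as conn:
    while True:
        chunk = conn.recv(4096)   # data arrives in order; losses are retransmitted
        if not chunk:
            break                 # an empty read means the peer closed the connection
        print("received", len(chunk), "bytes")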



Message 2 of 6
Thanks for your prompt reply.  I have read that article and agree that TCP would be nice; however, in this case I'm not free to use TCP since the sensor's protocol is already set.  The decision to use UDP was made mainly for speed, and I didn't have a choice in the matter.

I'm sure that I'm losing packets, but my question is why.  Is it because of speed or something else?  As I mentioned, I'm using a protocol analyzer to verify that the packets are safely making it to the network card, and indeed they are.  However, somewhere between the network card and my LV code, I'm losing them.  Is it because the buffer fills or because the packets expire?  Or is it some other reason entirely?  From the article you referred me to, it seems that all of the packets should simply sit in the buffer for each port until I pull them off, but for some reason that's not happening.

Sorry if I'm not very clear in my explanations; I'm not completely familiar with all of the networking jargon.

Tom
Message 3 of 6
I have a device that sends sequentially numbered UDP packets of about 150 bytes at a rate of up to about 1 Mb/s.  My LabVIEW code receives those.  I have never lost a packet (the program runs for months).
 
Do you have more details on your program?
 
In my case, I have a tiny, very tight independent loop that contains only a UDP read with an infinite timeout and an LV2-global-style FIFO buffer (size ~1024) to which it adds the newly arrived packets.  Elsewhere in the code the FIFO buffer is read and cleared at a more leisurely pace.  I monitor the maximum buffer use, and the highest in recent history was 287 entries (Athlon XP 2500).
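
In Python terms the structure looks roughly like the sketch below (just an illustration, not my actual LabVIEW code; the port and sizes are made up): a producer thread does nothing except blocking reads and enqueueing, and the consumer drains the queue elsewhere.

import socket
import threading
import queue

SENSOR_PORT = 61557                  # illustrative port
fifo = queue.Queue(maxsize=1024)     # plays the role of the LV2-global FIFO

def udp_reader():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", SENSOR_PORT))
    while True:
        packet = sock.recvfrom(2048)   # blocking read, i.e. an "infinite timeout"
        fifo.put(packet)               # no parsing, no UI work in this loop

threading.Thread(target=udp_reader, daemon=True).start()

# Elsewhere, at a more leisurely pace:
while True:
    data, addr = fifo.get()            # blocks until a packet is available
    # ...decode and process the packet here...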
 
Make sure that there are no other CPU hogs in your code.  For example, a single empty loop without a timeout can grab all CPU resources for a significant amount of time, starving your other processes.  What is your networking hardware? PCI, or e.g. a USB Ethernet adapter?
What else is running on your computer?
Message 4 of 6
Well, you don't say how much data you are expecting to receive.  One guess is that you could be overrunning the buffer, which on a Windows machine is probably set up for a 10 Mbps connection even if the connection is 100 Mbps.  It could also be that your application is taking too long to get around to dealing with the data.

You may need to get to know a lot more about UDP, so try the following; they are informative and interesting:
http://www.microsoft.com/technet/itsolutions/network/deploy/depovg/tcpip2k.asp
http://rdweb.cns.vt.edu/public/notes/win2k-tcpip.htm

Basically, you can increase the buffer size on the machine to accommodate more data.
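
Per socket, this is the SO_RCVBUF option; here is a hedged Python sketch (the size and port are only examples, and the registry settings discussed in the links above change the Windows-wide defaults rather than a single socket):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)   # request ~1 MB
sock.bind(("", 61557))   # hypothetical sensor port

# The OS may grant a different value than requested, so read it back to confirm.
print("effective receive buffer:",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))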

I hope this is the root cause.
Message 5 of 6
The last two responses were both very helpful.  I'm in the process of implementing both solutions.  Here are my responses to both posts:
-------
Altenbach,

My program is structured somewhat like yours.  I've got the UDP read running in a loop of its own; it determines which sensor sent the packet (I'm running a set of 8 sensors--each on its own UDP port) by the IP address and then puts the packet into the corresponding queue (out of an array of 8 queues--one per sensor).  Packets are pulled off of these queues by another loop and processed.

Unfortunately, I have a few other loops running at the same time.  One to handle the UI, one to handle commands and processes, one to process incoming packets, and one to perform the UDP read.  I have a feeling that this may not be the most elegant solution, but I'm not too much of a programmer, and it's the best I could come up with.  I've checked for other CPU hogs among these loops, and I can't find any right now--I've tried to put generous delays in my other loops to minimize their CPU usage; that's not to say I haven't missed anything, though.

I'm going to try to implement this loop with the LV2 style globals rather than queues, and then I'm going to move ALL of the processing (including deciding which sensor the packet is from) out of the UDP read loop and into the packet processing loop.  I think this will be a large step towards solving my problem.  Thanks a lot.

In response to the rest of your post, my hardware is a PCI NIC in a rackmount computer.  The computer's pretty fast, but there are two NICs--one to communicate with the sensors and another to communicate with the outside world.  Having two NICs may be another source of my problem--I don't know for sure.
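
If the two NICs turn out to matter, one thing I may try (sketched here in Python, not LabVIEW; the address and port are made up) is binding the UDP socket to the sensor-side card's address instead of to all interfaces, so the reads definitely go through the NIC I expect:

import socket

SENSOR_NIC_IP = "192.168.1.5"   # hypothetical address of the sensor-facing NIC
SENSOR_PORT = 61557             # hypothetical sensor port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((SENSOR_NIC_IP, SENSOR_PORT))   # instead of binding to all interfaces

data, addr = sock.recvfrom(2048)
print(len(data), "bytes from", addr)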
-------
help,

Thanks very much for the useful links about TCP/UDP in Windows.  I'm going to attempt to increase the buffer size to see if this helps as well.  I've been looking for articles just like these to help me understand what's going on.  This will help me greatly as I continue to improve my software.
-----

Thanks,
Tom
Message 6 of 6