
UDP read timing out

Solved!
Go to solution

I'm pretty sure this is entirely up to the operating system.  I can't find anything on MSDN about a socket option that would control whether to receive packets with invalid checksums.  Some searching on the web turned up one blog post (http://blog.southparksystems.com/2008/01/udp-checksums-in-embedded-linux-and.html) saying that Windows XP drops UDP packets with empty checksums, but I can't find any other source confirming or denying that in a quick search.

 

By the way, about the routing... it's not really polling until it finds a destination; Windows maintains a routing table, using information about the network adapters, netmask values, and gateways, that tells it where to send a packet based on its destination address.  I don't know much about the details of routing, but to get started, take a look at the "route" command at a Windows command-line.

Message 21 of 29
(3,297 Views)

Good link. Not good for us if it's true. I'll see if I can find a way to test this.

 

Funny, because everything I found claimed that it was acceptable to do that.

 

Thanks

Josh
Software is never really finished, it's just an acceptable level of broken
0 Kudos
Message 22 of 29
(3,290 Views)
Solution
Accepted by topic author JW-JnJ

Ok, solved. It was a unit problem, but I'll share the experience just in case someone finds it useful.

 

First off: Windows WILL receive UDP packets with a checksum of 0x0000 (checksum disabled).
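This is easy to reproduce outside LabVIEW. Below is a minimal Python sketch that sends a UDP datagram with the checksum zeroed and reads it back over loopback. SO_NO_CHECK is a Linux-only send option (numeric value 11, not exposed by Python's socket module), so treat this as an illustration of the RFC 768 "no checksum" rule rather than a definitive Windows test:

```python
import socket

# RFC 768: a transmitted UDP checksum of 0x0000 means "no checksum computed".
# Linux exposes this as the SO_NO_CHECK socket option (value 11); Python's
# socket module does not define the constant, so we define it ourselves.
SO_NO_CHECK = 11

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sender.setsockopt(socket.SOL_SOCKET, SO_NO_CHECK, 1)  # zero the checksum
except OSError:
    pass  # non-Linux stacks may reject the option; the send still works

sender.sendto(b"ping", addr)
data, _ = receiver.recvfrom(64)
print(data)  # the receiving stack delivered the zero-checksum datagram
```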

 

The real problem was that the packet length in the IPv4 header was wrong. Funnily enough, Wireshark does not flag this, and Windows silently drops the packet. If the UDP length in the header is wrong, Wireshark will flag it.
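For reference, the IPv4 Total Length field must count the IP header plus everything after it, which is exactly the value the device was getting wrong. A small Python sketch (hypothetical addresses and payload) that builds a 20-byte IPv4 header with a correct Total Length and header checksum:

```python
import socket
import struct

def ipv4_header(payload_len, src, dst, proto=17, ttl=64, ident=0):
    """Build a minimal 20-byte IPv4 header (proto 17 = UDP, no options).
    Total Length counts the IP header AND everything after it, i.e.
    20 (IP header) + 8 (UDP header) + payload bytes."""
    ver_ihl = (4 << 4) | 5                  # version 4, 5 x 32-bit words
    total_length = 20 + 8 + payload_len
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, total_length, ident,
                      0, ttl, proto, 0,     # checksum field zero for now
                      socket.inet_aton(src), socket.inet_aton(dst))
    # One's-complement sum of the ten 16-bit header words
    s = sum(struct.unpack("!10H", hdr))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    checksum = ~s & 0xFFFF
    return hdr[:10] + struct.pack("!H", checksum) + hdr[12:]

hdr = ipv4_header(len(b"hello"), "192.168.1.10", "192.168.1.20")
print(struct.unpack("!H", hdr[2:4])[0])     # Total Length: 20 + 8 + 5 = 33
```

Folding the one's-complement sum of the finished header (checksum included) back to 16 bits yields 0xFFFF, which is how a receiving stack validates it.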

 

For people delving this deep into UDP packets, I used this freeware packet builder.

 

Thanks for the help. Kudos.

 

Josh
Software is never really finished, it's just an acceptable level of broken
Message 23 of 29
(3,273 Views)

How did you change the packet length of the IPv4 header? Could you please shed some light on this?

Thanks

Austin

0 Kudos
Message 24 of 29
(3,245 Views)

 

I don't know a lot about Wireshark, but I assume it hooks into the network stack at a much lower level and reads raw packets before the operating system has a chance to do anything with them.  Your LabVIEW and Ruby programs both rely on the operating system to pass packets to them, so if the operating system sees the packets as invalid and drops them, you'll never get them regardless of programming language.


Actually, Wireshark uses the PCAP driver, which hooks into the network driver directly, before the OS network stack sees any data. So Wireshark will report any data that comes through the network card drivers (except some local traffic on some Windows versions, as that is routed within the network stack directly without going through an actual software loopback device).

 

 


Some LabVIEW-specific questions: if the UDP checksum (not the IPv4 or Ethernet checksum) is invalid, will LabVIEW report it with a read?

 

If the checksum is 0x0000, will LabVIEW follow the UDP protocol and ignore the checksum?


 

LabVIEW does absolutely no interpretation of its own here. It simply calls the Winsock API and passes whatever data it gets (or doesn't get) to your program.

Rolf Kalbermatter
My Blog
Message 25 of 29
(3,239 Views)

@AustinCann wrote:

How did you change the packet length of the IPv4 header? Could you please shed some light on this?

Thanks

Austin


It looks like they have their own device they need to test, which had a firmware bug and created invalid IP frames. So their (in-house) programmer probably fixed the firmware of the device. If you are asking about how to fix the IP frame from within LabVIEW: you don't. LabVIEW does not give you access to the IP, UDP, or TCP frames at all.

Rolf Kalbermatter
My Blog
0 Kudos
Message 26 of 29
(3,235 Views)

@rolfk wrote:

@AustinCann wrote:

How did you change the packet length of the IPv4 header? Could you please shed some light on this?

Thanks

Austin


It looks like they have their own device they need to test, which had a firmware bug and created invalid IP frames. So their (in-house) programmer probably fixed the firmware of the device. If you are asking about how to fix the IP frame from within LabVIEW: you don't. LabVIEW does not give you access to the IP, UDP, or TCP frames at all.


Correct. We had an FPGA building and sending the packets. It had the wrong algorithm for calculating the IPv4 length.

 

The only thing I found that can send a raw packet is the packet builder I linked above, which lets you tinker with nearly anything. Everything else sends through the Windows network stack, which builds the packets automatically.

 

So really, this thread is just a warning for people using embedded Ethernet with an unproven library: a wrong IPv4 length can lead to hard-to-find trouble. I'm sure many other things can cause Windows to silently drop the packet too.

Josh
Software is never really finished, it's just an acceptable level of broken
0 Kudos
Message 27 of 29
(3,230 Views)


I seem to be experiencing one of the "other things that can cause Windows to silently drop the packet".

 

I'm using UDP to send commands and responses between two devices.  Sometimes (~1 out of 100) the UDP Read on the Windows device running LabVIEW 2012 times out.  Wireshark on the same Windows machine sees the packet the other device transmitted. I've examined the Wireshark output and I don't see any differences between a "received" packet and the "dropped" packet that could explain the problem.

 

Any thoughts?

 

0 Kudos
Message 28 of 29
(2,926 Views)

Windows gives every UDP socket a default-sized receive buffer that operates as a FIFO.

 

If packets arrive at a high enough rate and you don't service the socket fast enough, the buffer fills up and Windows silently discards further incoming packets.

 

You can configure Windows to use a larger default size for all UDP sockets, or you can use this technique I used long ago to increase a specific LabVIEW UDP reference's buffer size.

 

http://forums.ni.com/t5/LabVIEW/Change-UDP-Socket-ReceiveBufferSize-under-Windows/td-p/483098
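Outside LabVIEW, the same knob is the standard SO_RCVBUF socket option. A Python sketch (the 1 MiB figure is an arbitrary example; Linux may cap the request at net.core.rmem_max and reports back twice the granted size):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))

# Ask for a 1 MiB receive buffer instead of the OS default. Datagrams that
# arrive while the buffer is full are discarded before your program ever
# sees them, which looks exactly like a silent drop / read timeout.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024 * 1024)

# Read back what the OS actually granted (Linux reports double the value
# it set, and may cap the request at net.core.rmem_max).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
```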


Now is the right time to use %^<%Y-%m-%dT%H:%M:%S%3uZ>T
If you don't hate time zones, you're not a real programmer.

"You are what you don't automate"
Inplaceness is synonymous with insidiousness

0 Kudos
Message 29 of 29
(2,920 Views)