UDP Error 61 on cRIO - but controller memory usage is <~70%


Hi Mark,

 

I'm trying to look at your code but I can't run it because I'm missing a few files:

  1. 9213 Convert to Temp (Calibrated)
  2. 9213 Convert to Temp
  3. Thermocouple type.ctl
  4. Bank.ctl

When you run with highlight execution, where does the error occur? If you can pinpoint where you are getting the error, it will speed up the troubleshooting process.

 

Kind Regards,
Lucas
Applications Engineer
National Instruments
Message 11 of 19

Hi Lucas

Apologies for the missing elements: the VIs are part of standard libraries installed with the cRIO drivers, and the control was simply forgotten. I have attached them all.

I'm not familiar with the real-time execution trace (I've never used it), and unfortunately highlight execution isn't suitable here - the FPGA DMA FIFO will overflow before the first read even occurs, so no UDP transmissions will take place. However, it's easy enough to establish the source of the error: tracing the error wires shows it's coming from the UDP Write function in the UDP transmit sub-VI. As far as I can see there is nothing notably different about the inputs to the function or the state of the VI when it fails - the references appear valid, the string is of standard length, and there are no errors on the input.

 

Best regards,

Marcus

Message 12 of 19

Hi Mark,

 

I've been looking at your code and comparing it to the example UDP Multicast project. I noticed a few differences that might be worth looking at:

  1. You use a timeout of 1000 ms on the UDP Write function, whereas they use the default, which is 25000 ms.
  2. You use a time to live of 2 rather than 1.
  3. When you initialise the UDP connection, you decrement the port number?
  4. If an error is passed into your UDP transmit, you do not close the UDP connection.

Let me know if changing any of these fixes the problem. 
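For reference, those four settings translated into a plain-Python analogue (not a LabVIEW VI - the multicast group address below is just a placeholder, and 58431/58432 are the local/destination ports used elsewhere in this thread) look roughly like this:

```python
# Plain-Python sketch of the UDP multicast "write-only open" + write,
# showing where the four settings above would live.
import socket

LOCAL_PORT = 58431       # port the write-only connection is opened on
DEST_PORT = 58432        # port the datagrams are actually written to
GROUP = "239.0.0.1"      # placeholder multicast group
TTL = 1                  # 1 = stay on the local subnet; 2 allows one router hop

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)
sock.settimeout(25.0)    # rough stand-in for the 25000 ms default timeout
sock.bind(("", LOCAL_PORT))

def transmit(payload: bytes) -> None:
    # Send one datagram to the multicast group.
    sock.sendto(payload, (GROUP, DEST_PORT))

# Point 4: if an error is passed in (or on shutdown), the connection
# should still be closed:
# sock.close()
```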

 

Kind Regards,
Lucas
Applications Engineer
National Instruments
Message 13 of 19

Hi again Lucas,

I've had another play around with my cRIO today.

 

1. I increased the timeout on the UDP write from 1000 to 2000, then 25000, mainly to see if it had any effect. I couldn't notice any difference. To be honest, even 1000ms seems rather too generous considering my data is being acquired at ~10us on 8 channels. Even if the UDP transmit blocks for just 1000ms, that would build up an 800,000-sample backlog! (Quick sanity check of that figure in the snippet after this list.)

2. I think I probably set the TTL to 2 at an earlier stage, when I may have had a router between the cRIO and the development PC. Since the setup now, and in the final application, will both be on the same subnet, I've reduced it to 1. Thanks for spotting that! Alas, no change in errors.

3. This was a copy from the example 'UDP Multicast Sender.vi' which opens the 'UDP Multicast Write-Only Open.vi' with a local port of 58431, then in the UDP transmit writes to port 58432. I don't understand the reason for the difference myself, but since it was there, I copied it. ¯\_(ツ)_/¯

4. Another good spot. Error handling is not a forte of mine. Corrected.
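For the record, the arithmetic behind the 800,000 figure in point 1, as a trivial snippet (plain Python, not a VI):

```python
# Backlog if the UDP Write blocks for 1000 ms: 8 channels, one sample
# every ~10 us per channel.
channels = 8
sample_period_s = 10e-6
block_time_s = 1.0

print(channels * block_time_s / sample_period_s)  # -> 800000.0 samples
```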

 

Unfortunately even with these changes, I'm still persistently getting the error. I've noticed that most often both transmit functions from my two fast loops return errors when the system fails. Occasionally it's only one - AI1 or AI2, but most often both. It seems something is happening that's affecting both streams.

Another thing I've noticed (this information may not be useful, but hey): there seems to be some element of synchronisation going on. Sample rates that are multiples of 10 (10us, 100us) run for far longer before failing than, e.g., 12us or 120us.

Message 14 of 19

Hi Mark,

 

I would recommend making a copy and simplifying it as much as possible, for instance only having one fast loop. Have you considered using a third-party tool such as Wireshark to analyze the packets?

 

Kind Regards,
Lucas
Applications Engineer
National Instruments
Message 15 of 19

Hi Lucas,

I'll recompile new FPGA code that only samples on one bank so that I can completely eliminate the other two - without modifying the FPGA code I need to at least run the read loops just to stop the DMA FIFOs overflowing. I've already tried something similar by using diagram disable structures to eliminate the TC & AI2 convert/transmit sub-VIs - even transmitting just one channel gave errors.

I'll also try running some code that just generates random strings and transmits them by UDP at a roughly equivalent rate, and see how the UDP functions cope.  If it's solely a UDP issue I may be able to reproduce it with minimal controller code and no FPGA stuff. On the other hand, if it's really a controller memory related issue, cutting code out may make the problem 'disappear', just by freeing up more memory.
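Something along these lines is what I have in mind for the pure-UDP test - sketched here in plain Python rather than as a VI, with the multicast group address just a placeholder and the packet size/rate values to be tuned towards the real stream:

```python
# Minimal UDP load generator: random payloads at a fixed rate, no FPGA or
# DAQ involved, to see whether the UDP path alone reproduces the error.
import os
import socket
import time

GROUP, DEST_PORT = "239.0.0.1", 58432   # placeholder group, port from the example
PAYLOAD_BYTES = 1400                    # roughly one Ethernet frame's worth
PACKETS_PER_SEC = 5000                  # tune towards the real transmit rate

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

period = 1.0 / PACKETS_PER_SEC
next_send = time.monotonic()
try:
    while True:
        sock.sendto(os.urandom(PAYLOAD_BYTES), (GROUP, DEST_PORT))
        next_send += period
        time.sleep(max(0.0, next_send - time.monotonic()))
except KeyboardInterrupt:
    sock.close()
```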

One possibility is that the UDP transmit loops are being run consecutively with too little wait between them (does the function return before the transmission is complete?) - perhaps the order of code execution in my original non-sub-VI'd version introduced effective delays. I'll try putting some waits between my transmit functions, or perhaps putting them in a timed loop, and see if that makes a difference.

 

I hadn't considered Wireshark... if packets were getting corrupted during transmission, though, that would cause problems with the read, not the write. I was going to say my system is write-only, but I suppose LabVIEW itself must have some communications going on in the background to show things like the controller front panel.
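If I do want to check the packets without pulling out Wireshark, I suppose even a quick listener on the PC that joins the group and counts datagrams per second would do - roughly like this (again plain Python, placeholder group address, port from the example):

```python
# Quick-and-dirty receive check on the PC side: join the multicast group
# and count datagrams per second, just to confirm the packets arrive.
import socket
import struct
import time

GROUP, PORT = "239.0.0.1", 58432   # placeholder group, port from the example

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

count, window_start = 0, time.monotonic()
while True:
    sock.recv(65535)
    count += 1
    now = time.monotonic()
    if now - window_start >= 1.0:
        print(f"{count} packets/s")
        count, window_start = 0, now
```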

In fact it would be nice to know more about how LabVIEW's internal communications work, since essentially it's already doing what I want - transmitting the read data to my PC over Ethernet!! The controller must be under twice the necessary load!! 0_o

 

 

Message 16 of 19
Solution
Accepted by marc1uk

Hello!

Apologies for the delay in response; this thread was not dead, just sleeping.

I've put a wait in my UDP transmit loop and my error 61 has gone! Even a wait of just 2us seems to completely clear the problem - in fact, changing between 2us and 200us doesn't seem to change the rate at which the UDP queue dispenses, so transmission still seems to be going at the same rate. What this actually changes is a bit of a mystery - maybe all the UDP Write function does is place the data on an internal queue, and 2us is all it needs to reallocate memory in that queue?
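For anyone finding this later, the shape of the fix, written out as a plain-Python stand-in for the LabVIEW transmit loop (placeholder destination; the real thing is a dequeue, then UDP Write, then a Wait):

```python
# Dequeue a string, send it, then yield briefly before the next iteration.
# (OS timers won't actually honour a 2 us sleep; the point is the yield.)
import queue
import socket
import time

tx_queue: "queue.Queue[bytes]" = queue.Queue()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("239.0.0.1", 58432)   # placeholder group, port from the thread
WAIT_S = 2e-6                 # even ~2 us was enough to clear error 61

def transmit_loop() -> None:
    while True:
        payload = tx_queue.get()   # blocks until data is queued
        sock.sendto(payload, DEST)
        time.sleep(WAIT_S)         # the small wait that fixed it
```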

Whatever the case, it's something I may investigate and post back on later, but unfortunately, due to workload, I don't have the time right now. For now, my code is running, so I'm considering this problem solved!

 

My program now runs continuously at a steady 14us loop rate (~71kHz), limited, it seems, by the UDP transmission rate. That's some way off the 1MHz of the DAQ card, but I'd probably hit a few more barriers between here and there. Perhaps some data compression is in order, but that's another thread. 🙂

 

Many thanks Lucas for your ideas, support and guidance.

Message 17 of 19

Hi Mark, 

 

Sorry for the late reply; I must have missed your last post. I'm glad that you have managed to solve your problem. I believe that you may have been filling up the internal queue on the cRIO, which caused the error.

 

 

Kind Regards,
Lucas
Applications Engineer
National Instruments
Message 18 of 19

Hi Lucas,

I'm still not entirely convinced this fits the symptoms: the number of elements in my UDP string queue seems to deplete at roughly the same rate regardless of the wait time (anywhere between 2 and 200us), which suggests the dequeue-and-transmit loop rate isn't actually being set by the enforced wait - something else (the UDP Write function, since it's all that's in the loop) must be setting the rate. If a wait of just 2us is enough to make a difference, the transmit loop must be running in under 2us without it, at which rate the internal queue overflows. But if the loop rate really were set by the wait, increasing the wait by a factor of 100 should produce a 100x drop in the rate at which my UDP string queue depletes - and it doesn't...
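The kind of check that would settle it, sketched in plain Python (local loopback destination, purely illustrative): time a fixed number of sends with different wait values and see whether throughput really scales with the wait.

```python
# Time N sends for several wait values; if the packet rate barely changes,
# the wait isn't what paces the loop.
import os
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("127.0.0.1", 58432)   # loopback stand-in for the real destination
N = 50_000

for wait_s in (0.0, 2e-6, 200e-6):
    start = time.monotonic()
    for _ in range(N):
        sock.sendto(os.urandom(64), DEST)
        if wait_s:
            time.sleep(wait_s)
    rate = N / (time.monotonic() - start)
    print(f"wait {wait_s * 1e6:5.0f} us -> {rate:8.0f} packets/s")
```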

 

In that case, what is the wait even doing, if it's not slowing down the loop rate?? It seems to matter purely that the wait is there...

 

Well, a mystery for the LabVIEW theoreticians, I guess.

Thanks again

Message 19 of 19