
LabVIEW


VI shuts off at 4.75 hours, like clockwork

I've clearly done something wrong but don't have the knowledge to identify what it is. I'm guessing it has something to do with bringing in data from both a DAQ and VISA but not sure. Any comments would be much appreciated.

Thanks.

Message 1 of 11

Hi Geologian,

 

do you get any errors in your DAQ (producer) loop?

 

Have you tried to decouple those different devices by reading each device in its own loop? This way you will prevent stalls when one device is slower than the other…
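LabVIEW code is graphical, but GerdW's decoupling suggestion maps onto a textual sketch. Below is a rough Python analogue (names and timings are made up for illustration, not taken from the original VI): each device gets its own producer loop and its own queue, so a slow or stalled device blocks only itself.

```python
import queue
import threading
import time

def producer(read_fn, out_q, stop):
    """Read ONE device in its own loop; a slow device blocks only itself."""
    while not stop.is_set():
        out_q.put(read_fn())
        time.sleep(0.001)  # stand-in for the device's own read latency

daq_q, scale_q = queue.Queue(), queue.Queue()
stop = threading.Event()

# Hypothetical read functions standing in for DAQmx Read and VISA Read
t1 = threading.Thread(target=producer,
                      args=(lambda: ("daq", time.time()), daq_q, stop))
t2 = threading.Thread(target=producer,
                      args=(lambda: ("scale", time.time()), scale_q, stop))
t1.start(); t2.start()
time.sleep(0.05)
stop.set(); t1.join(); t2.join()

# Each queue fills independently of the other device's timing
print(daq_q.qsize() > 0 and scale_q.qsize() > 0)
```

The point of the pattern is only the structure: one blocking read per loop, one queue per device, no shared iteration rate.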

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 11

I haven't tried that...I'll give it a go. Any thoughts on why this is happening, so I can avoid it in the future?
Thanks,
Peter

 

Message 3 of 11

Hi Peter,

 

my guess is: one of your devices gives an error and the producer stops…

Best regards,
GerdW


Message 4 of 11

And I will guess that Windows power-saving options are killing a comm port.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 5 of 11

Ben, can you please explain? Thanks.

Message 6 of 11

Expanding on GerdW's comments:

 

The code has a number of built-in timing dependencies and constraints among devices and queues.  Specifically:

 

1. The uppermost loop is where the problems start.  You have a DAQmx task sampling (seemingly) at 10 Hz where you're reading 1 sample at a time.  To keep up with the latest sample, you need all the other loop code to execute in 100 msec or less.  When you first lag behind, you'll simply be feeding OLD, STALE data to the waveform queue.  If you keep lagging behind long enough, the DAQmx task buffer will overflow, produce an error, and stop your loop.
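The arithmetic behind point 1 can be sketched in a few lines of Python (LabVIEW is graphical, so this is only a textual analogue; the loop time and buffer depth are made-up numbers chosen to show the shape of the failure, not measured from the actual VI):

```python
# Hypothetical numbers: the DAQ produces a sample every 100 ms; the loop
# body (serial I/O etc.) takes slightly longer, say 101 ms per iteration.
SAMPLE_PERIOD_MS = 100      # DAQmx task at 10 Hz
LOOP_BODY_MS = 101          # assumed total loop-iteration time
BUFFER_SIZE = 1_700         # assumed DAQmx buffer depth, in samples

def iterations_until_overflow(sample_ms, loop_ms, buf):
    lag_per_iter = loop_ms - sample_ms   # ms of backlog added each loop
    if lag_per_iter <= 0:
        return None                      # loop keeps up; no overflow
    # Overflow once the backlog reaches the buffer's worth of time
    return (buf * sample_ms) // lag_per_iter

n = iterations_until_overflow(SAMPLE_PERIOD_MS, LOOP_BODY_MS, BUFFER_SIZE)
hours = n * LOOP_BODY_MS / 3_600_000
print(f"overflow after ~{n} iterations (~{hours:.1f} h)")
```

With these invented numbers, a mere 1 ms of lag per iteration fills the buffer after several hours of flawless-looking operation, which is exactly the "runs fine, then dies like clockwork" symptom.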

 

2. So why might your loop lag behind?  Well, that's easy.  You're also doing several interactions with a couple serial devices in the same loop, including reads that need to wait for serial data to arrive.  I have no idea how long your serial devices require, but will give you a little reckless speculation anyway.  The fact that you have this magic, consistent timing interval of 4.75 hours until failure suggests something very small and systematic that builds up over a long time.

    I'll go out on a limb and guess that the Scale is *allegedly* delivering readings over serial at 10 Hz.  And that's why you set up your DAQmx task at 10 Hz sampling, 1 sample per loop iteration.  However, the Scale's timekeeping mechanism WILL NOT exactly match that of your DAQmx task.  I'm not at all sure this is the *main* problem, but it definitely is *a* problem.
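To put a number on that clock mismatch, here's a tiny Python illustration (the 9.999 Hz figure is pure assumption, just to show how fast even a 0.01% rate difference accumulates):

```python
# Two "10 Hz" clocks never agree exactly. Assume the scale actually
# delivers at 9.999 Hz while the DAQmx task samples at exactly 10 Hz.
daq_period = 1 / 10.0       # s per DAQ sample
scale_period = 1 / 9.999    # s per scale reading (assumed)

slip_per_sample = scale_period - daq_period   # seconds gained per sample
samples_to_slip_one = daq_period / slip_per_sample
seconds = samples_to_slip_one * daq_period
print(f"one full sample of slip every ~{seconds / 60:.0f} minutes")
```

Even at that tiny mismatch, the two streams drift apart by a whole sample every quarter hour or so, and the error only ever grows in one direction.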

 

3. There are also serial interactions with your Bronkhorst device.  Data from both this and the scale are combined together to be sent elsewhere via queue.

 

4. You'll need to at least separate the DAQmx task into its own loop.  It's probably a good idea to separate the serial devices into their own loops too, but you *might* get away with keeping them together in one loop.

    However, there's more trouble downstream...

 

5. Your middle loop continues this timing dependency because it dequeues both DAQmx data and serial data from the 2 queues.  This was ok so far because of the timing overconstraint discussed in #'s 1-3 above.  When you (wrongly) forced all devices to enqueue at the same rate, you could get away with forcing the dequeues to happen at the same rate.  Once you follow suggestion #4 above, you'll simply move your problem into the middle loop.

    The queues will start to fill at different rates, and emptying them at the same rate will leave a (growing) backlog in one of them.

 

6. The next step in the solution is for your serial devices to publish data to a Notifier rather than a Queue.  Then your middle loop should first dequeue DAQmx data, then "Wait on Notification" from the serial devices with a 0 timeout and "ignore previous" == False.
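Since LabVIEW is graphical, here's a rough Python analogue of the Notifier semantics described in step 6 (a single-slot, latest-value-wins channel, unlike a Queue which backlogs every element; the class and values are invented for illustration):

```python
import threading

class Notifier:
    """Single-slot channel: Send overwrites, readers get the latest value."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
        self._fresh = False

    def send(self, value):                  # like "Send Notification"
        with self._lock:
            self._value = value             # overwrite: old values are dropped
            self._fresh = True

    def wait(self, ignore_previous=False):  # like "Wait on Notification", timeout 0
        with self._lock:
            if ignore_previous and not self._fresh:
                return None                 # nothing new since last read
            self._fresh = False
            return self._value

n = Notifier()
n.send("scale: 12.30 g")
n.send("scale: 12.31 g")   # overwrites; the consumer never sees 12.30
print(n.wait())            # -> "scale: 12.31 g"
```

With "ignore previous" == False, the consumer always gets the most recent reading even if it has seen it before, so the DAQmx dequeue sets the loop pace and the serial data can never build a backlog.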

 

That's the main stuff I see at the moment.  Meanwhile, I'd highly recommend tidying up the wiring and finding strategic places to make more sub-vi's.  Don't do it for me, do it for *you*.

 

 

-Kevin P

 

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 7 of 11

Windows has power-saving options (since it is sooo cool to be green, after all) that will automatically shut off USB ports etc.

 

Somewhere under Control Panel there is a power-management setup option.

 

You may also have to go to Device Manager and check for power-saving options there as well.

 

I am not an IT guy so I can only offer clues as to where to look.

 

Ben

Message 8 of 11

Small addendum to my previous post:

 

The way I advised you to use Notifiers isn't substantially different from using global variables.  I typically don't use globals, but I recall that you already had several, so what's 2 more?

 

You may also need to work out a more robust scheme for stopping all the distinct parallel loops.

 

Per Ben's remarks, Windows power-saving settings have been a root problem in lots of threads here, particularly with USB-connected devices (whether USB DAQ or USB-to-serial converters).  I considered it, but the sorta unusual 4.75 hour time limit put it lower on my suspect list.  I guess we'll see...

 

 

-Kevin P

Message 9 of 11

Everyone else has good suggestions, but here's another possibility to consider: is it 4.75 hours exactly, to the minute and second?  Or just close?

 

The reason I ask is that whenever a failure time is close to a round binary number of milliseconds or seconds, it can mean an internal counter somewhere has overflowed, or a buffer somewhere has run out of memory.

 

2^24 milliseconds is 4.66 hours, and 2^14 seconds is 4.55 hours.  Those are only off from 4.75 hours by 5 and 12 minutes respectively.
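The rollover arithmetic above is easy to check in a couple of lines of Python:

```python
# Convert the two candidate binary rollovers into hours.
ms_rollover_h = 2**24 / 1000 / 3600   # 2^24 milliseconds, in hours
s_rollover_h = 2**14 / 3600           # 2^14 seconds, in hours
print(f"2^24 ms = {ms_rollover_h:.2f} h, 2^14 s = {s_rollover_h:.2f} h")
# -> 2^24 ms = 4.66 h, 2^14 s = 4.55 h
```

So if the failure turns out to land at 4.66 hours rather than 4.75, a 24-bit millisecond counter somewhere in the chain becomes a strong suspect.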

Message 10 of 11