Multifunction DAQ


DAQmx: strange behavior or sampling issue

Solved!
Go to solution

My client is seeing unusual behavior.  Since I can only simulate their setup, I have not been able to easily duplicate the issue.  The current implementation evolved from attempts to isolate and "fix" the issue.

 

I may need to enter several posts to explain everything.  I inherited the program and do not know its history or why things were configured the way they were.  The issue arose when moving the remaining channels from Shared Variables to DAQmx Read function calls.

Well... there are also some DAQmx Write calls.

 

The system consists of PXI Chassis with the following boards:

 

PXIe-4339            Bridge inputs

PXIe-4353 (qty 7)    Thermocouples

PXIe-4300 (qty 6)    Analog inputs

PXI-6704             Analog outputs

PXIe-4340 (qty 2)    LVDTs

PXI-2566 (qty 2)     Relays

PXI-6529 (qty 2)     Digital inputs

PXIe-4357            RTDs

 

Nearly all channels of each card are being used.

 

To simplify life and implementation, tasks were created for most of the above:

6 tasks for Analog Input reading

1 task for Bridge Inputs

1 task for Digital Inputs

1 Load Cell task

1 LVDT task

1 Analog output task

2 tasks for the relays

1 task for the RTDs

1 task to read all thermocouples.

 

The program must record the complete set of readings every 100 ms.  That should not be an issue, since it takes approximately 35 ms for the PXI to read everything within a cycle.  In theory, at least; it does work for the first few seconds.  The simulated environment is slower, mostly because the development environment runs inside a virtual machine.

 

The various tasks listed above appear within 4 separate daemons divided as:

 

1. Analog inputs and thermocouple readings

 

2. All digital input / output signals

 

3. All relay control

 

4. All analog outputs (not that many)

 

 

The initial error message observed by the client was not very descriptive.  It also only occurred on their machine, with the executable version.  I could not find any documentation on it, but it seemed to stem from the PXI PC not being able to keep up with all the measurements.

 

I decided to implement a Rendezvous to make sure that all daemons completed their tasks before allowing the next set of measurements to proceed.  The error message persisted.  I added Semaphores to force each daemon to execute one at a time.  No change in the error message.  I added a delay within the Timed Loop (as if that would help).  As expected, no change.

 

Reducing the sampling rate from 1 kHz to 10 Hz helped a little.  At least the program can take some measurements.  But then another error appeared, complaining about the PC not being able to keep up with the acquisition.  I will try to get the exact error message...  I thought I had it when I started this thread.

 

The Acquisition Mode for most tasks is set to 1 Sample (HW Timed) at 10 Hz.  This applies to:

Analog Inputs

Bridge Inputs

Load Cells

LVDTs

Temperatures (thermocouples)

RTDs

 

Digital Inputs, Relays, and Analog Outputs are set to 1 Sample (On Demand), as required by NI MAX.

 

With the problem still present, I added a DAQmx Timing function set for Sample Clock with a constant rate of 10 Hz.  No improvement.

 

I may or may not be able to show the code, so I will add the following.

 

I start the DAQmx tasks outside the Timed Loop and only close them when execution exits the loop.  Opening and closing a task at each iteration could take too much time, as they may want to sample as often as every 50 ms (that's 15 ms to spare at best).

 

The Timed Loop executes every 100 ms.

 

I'm not sure what to add other than a sample of the code...  I may implement a mock example that resembles what is implemented.  I will post it later.

Message 1 of 15

Interesting behaviour.

 

Would it be possible to disable all but one task and run at a 100 Hz loop rate?  I'm curious to see whether even just one task streaming over PXI is causing the issue.

 

Has anything changed in the software, any Windows update, BIOS update, etc.?

 

BTW, you didn't mention whether you use a PXI embedded controller or control the chassis over MXI.

-Santhosh
Semiconductor Validation & Production Test
Soliton Technologies
NI CLD, CTD
LabVIEW + TestStand + TestStand Semiconductor Module (2013 - 2020)
NI STS for Mixed signal and RF
Message 2 of 15

A couple things make the ol' spidey senses tingle a bit.

 

1. Did several tasks start out in "Hardware-Timed Single Point" sampling mode?  IMO, that's *rarely* the right choice.  The main exception that comes to mind would be certain RT control loops where you need to be made aware of a loop timing failure *immediately*.

 

2. DAQmx Acquisition inside a Timed Loop also smells like a possible source of trouble.  I don't think I've ever put an input task inside a Timed Loop -- it's awfully easy to overconstrain the timing.  (Here's just one of several threads I've been in regarding such overconstraint.)  

    Further, if multiple input tasks with different sample rates are being serviced inside a single common loop, it's pretty easy for one of the tasks to be the "slow" one that causes other tasks to build up a backlog until you get an overflow error.   (-200279 I think).

 

3. The use of semaphores and rendezvous add another possible timing constraint and seem more likely to harm than to help.

 

Some of the timing requirements aren't real clear.  There was an initial 1000 Hz sample rate mentioned and also a 10 Hz logging rate.  I don't know your particular devices, but I know at least some of NI's bridge devices have fairly high minimum sample rates and some of their temperature devices have fairly low maximums.   Getting lots of constrained devices sync'ed up ranges from tricky to infeasible and other strategies need to be adopted.

 

One that comes to mind is to carefully separate tasks into multiple loops, one for each distinct sample rate that needs to be supported.  These loops could then push their data into some data manager.  

    Somewhere else there'd be a file logging loop controlled to iterate at 10 Hz, each time writing the most recent known values delivered by the data acq loops.
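The structure sketched above (one acquisition loop per rate feeding a data manager, plus an independent 10 Hz logging loop) can be modeled hardware-free in Python; all names here are illustrative, not from the original program:

```python
import threading

class LatestValueStore:
    """Data manager: each acquisition loop pushes its newest readings;
    the logging loop snapshots whatever is current at its own 10 Hz pace."""
    def __init__(self):
        self._lock = threading.Lock()
        self._latest = {}

    def push(self, task_name, values):
        """Called by an acquisition loop after every successful read."""
        with self._lock:
            self._latest[task_name] = values

    def snapshot(self):
        """Called by the logging loop: most recent known values, copied."""
        with self._lock:
            return dict(self._latest)

store = LatestValueStore()
# Each acquisition loop would push after every read, e.g.:
store.push("Thermocouples", [23.1, 23.4, 22.9])
store.push("LVDTsA", [0.12, 0.15])
# The 10 Hz file-logging loop writes whatever is current:
row = store.snapshot()
print(sorted(row))
```

The key property is that the logger never blocks an acquisition loop for longer than a dictionary copy, so no task's read cadence depends on any other task.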

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW?
Message 3 of 15

Thanks, Kevin, for the suggestions.  I will respond inline within your comments / questions.

 


@Kevin_Price wrote:

A couple things make the ol' spidey senses tingle a bit.

 

1. Did several tasks start out in "Hardware-Timed Single Point" sampling mode?  IMO, that's *rarely* the right choice.  The main exception that comes to mind would be certain RT control loops where you need to be made aware of a loop timing failure *immediately*.

No they didn't.  I think it was originally set to "Continuous" for the Shared Variables. I no longer recall why I changed it.  I may have the original NI MAX Configuration and will check it out.  I may try putting it back to continuous (if MAX does not complain).

 

2. DAQmx Acquisition inside a Timed Loop also smells like a possible source of trouble.  I don't think I've ever put an input task inside a Timed Loop -- it's awfully easy to overconstrain the timing.  (Here's just one of several threads I've been in regarding such overconstraint.)  

    Further, if multiple input tasks with different sample rates are being serviced inside a single common loop, it's pretty easy for one of the tasks to be the "slow" one that causes other tasks to build up a backlog until you get an overflow error.   (-200279 I think).

That is not the error that was obtained, but I will pay attention to it.  All tasks within a loop (or daemon) have the same sample rate.  I will add some code to verify if the overconstraint occurs in the executable version.

 

 

3. The use of semaphores and rendezvous add another possible timing constraint and seem more likely to harm than to help.

I agree...  I wanted to see if the PXI system was having difficulty with all the tasks.  It would surprise me if it did because other systems handled the multiple tasks quite well.  I do plan to remove the extra overhead once the bug is fixed.

 

Some of the timing requirements aren't real clear.  There was an initial 1000 Hz sample rate mentioned and also a 10 Hz logging rate.  I don't know your particular devices, but I know at least some of NI's bridge devices have fairly high minimum sample rates and some of their temperature devices have fairly low maximums.   Getting lots of constrained devices sync'ed up ranges from tricky to infeasible and other strategies need to be adopted.

This is what has been on my mind...  I have not had this mix of devices in any previous system.  It's good to know about the wide variation in sampling rates.  I am not at the point where I want to optimize the system.  I may try introducing one daemon at a time and see what happens within the overall system: basically, spawn all daemons but place a rather large delay before each one enters its loop.  It may point to something interesting.

 

One that comes to mind is to carefully separate tasks into multiple loops, one for each distinct sample rate that needs to be supported.  These loops could then push their data into some data manager.  

That is how it is currently implemented... with the error...

 

    Somewhere else there'd be a file logging loop controlled to iterate at 10 Hz, each time writing the most recent known values delivered by the data acq loops.

 

Yes, it is also implemented that way: the data is collected from all daemons and then written to file.

 

-Kevin P


I will keep you informed of the progress.

Message 4 of 15

@santo_13 wrote:

Interesting behaviour.

 

Would it be possible to disable all but one task and run at a 100Hz loop rate? curious to see if even just 1 task streaming over PXI is causing the issue.

I will implement a cascading delay with plenty of time to observe the behavior of each daemon.  The delay will be placed in front of the loop, to mimic one task starting at a time, then adding the next one...

 

Anything changed on the software, any windows update, bios etc.,?

 

According to the customer, there was a Windows update.  I doubt it is causing this particular issue.

 

BTW, you didn't mention if you use a PXI embedded controller or use it over MXI?


Embedded controller.  I would need to check the designation.

Message 5 of 15

Just one other tidbit to keep in mind:

 

The sample rate you *ask* for is not always the sample rate you *get*.   This is why most of the shipping examples query for the actual sample rate right after the call to configure it.  It might be good to add such a query to the tasks for troubleshooting purposes.

 

I can recall several threads where folks were using a device with a high minimum sample rate.  They *thought* they were configured for 100 Hz sampling, but the task would return 1000 samples to them in less than a second.  They didn't realize that DAQmx actually silently coerced their 100 Hz request to the lowest legal sample rate of ~1.6 kHz.
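That silent coercion is simple enough to mimic in a hardware-free sketch (the limits below are illustrative round numbers, not the specs of any board in this thread), and it shows why querying the rate back matters:

```python
def coerced_sample_rate(requested_hz, device_min_hz, device_max_hz):
    """DAQmx-style coercion: an out-of-range rate request is clamped to
    the nearest legal rate rather than raising an error, which is why the
    shipping examples query the actual rate after configuring timing."""
    return min(max(requested_hz, device_min_hz), device_max_hz)

# A bridge-style device with a ~1.6 kS/s minimum silently turns a 100 Hz
# request into 1600 Hz: 16x more data per second than the code expects.
print(coerced_sample_rate(100, 1600, 25600))
# An in-range request passes through unchanged.
print(coerced_sample_rate(5000, 1600, 25600))
```

Any downstream read size or buffer size computed from the *requested* 100 Hz would then be 16x too small, producing exactly the "can't keep up" symptom.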

 

There are other subtler ways that different devices may have sample rates that differ by a fraction of a % -- but those would probably take a long time to produce the kind of error I have in mind.  At least quite a few minutes I'd think.

 

Sorry if I'm boring you with stuff you already know, but FWIW, a decent workaround for these things when you have a bunch of continuous sampling tasks spread across devices is this:

- use wait primitives (or possibly your timed loop) to drive loop timing *instead* of using DAQmx Read calls that request a constant # samples

- instead, wire in a -1 as the 'number of samples' in your calls to DAQmx Read.  For continuous tasks that value has the special meaning "give me all you've got", which will help prevent a buildup of backlogged data

- the downside is that you may be retrieving variable #'s of samples each time.  So if the app depends on fixed sized packets, you need another layer in the middle to manage *that* part.
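That middle layer for fixed-size packets can be sketched hardware-free in Python (names and packet size are illustrative): it accepts whatever a variable-size read returned and emits complete packets only when enough samples have accumulated.

```python
from collections import deque

class Rechunker:
    """Middle layer between variable-size reads ('all available samples')
    and downstream code that expects fixed-size packets."""
    def __init__(self, packet_size):
        self.packet_size = packet_size
        self._buf = deque()

    def feed(self, samples):
        """Accept whatever a read returned; yield any complete packets."""
        self._buf.extend(samples)
        while len(self._buf) >= self.packet_size:
            yield [self._buf.popleft() for _ in range(self.packet_size)]

r = Rechunker(packet_size=4)
packets = list(r.feed([1, 2, 3]))         # not enough yet: no packets
packets += list(r.feed([4, 5, 6, 7, 8]))  # now two full packets emerge
print(packets)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Leftover samples simply wait in the deque for the next read, so no data is dropped regardless of how ragged the read sizes are.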

 

 

-Kevin P

Message 6 of 15

Below is a representative example of the code that I am dealing with.  The example uses the Timed Loop, but a While Loop, with or without a delay, generates the same error message:

 

Error -200279 occurred at Untitled 1

Possible reason(s):

The application is not able to keep up with the hardware acquisition.
Increasing the buffer size, reading the data more frequently, or specifying a fixed number of samples to read instead of reading all available samples might correct the problem.

Property: RelativeTo
Corresponding Value: Current Read Position
Property: Offset
Corresponding Value: 0

Task Name: LVDTsA

 

Code snippet:

 

DAQmx Read.png

 

Message 7 of 15

Tasks in NI MAX.  Each separate task is on a separate board.

 

Thermocouple

task-TC.png

 

Analog Inputs,

task-AI.png

Load Cell, LVDTs

task-LC.png

 

RTDs

task-RTD.png

 

Message 8 of 15
Solution
Accepted by topic author Ray.R

It would be better to read 100 ms worth of samples on every call of DAQmx Read than to leave it reading whatever happens to be available.

 

In the task, increase your samples to read so that there is a large enough buffer to keep up with the capture and not overflow.  The biggest red flag I see is the LVDT / load cell task, which samples at 1 kS/s while samples to read is just 10 and you read only every 100 ms; that does not quite match up.

 

Increase your samples to read to 1 s worth of samples and ask for 100 ms worth of samples on every call of DAQmx Read.  You could ditch the Timed Loop too, since your samples are hardware-timed; it would be near perfect (dependent on the actual sampling rate). 
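The arithmetic behind this fix can be written out as a hardware-free Python sketch (the helper name is illustrative): query the rate the driver actually granted, buffer 1 s worth, and request 100 ms worth per read so the blocking read itself paces the loop.

```python
def read_plan(actual_rate_hz, read_period_s=0.1, buffer_s=1.0):
    """Given the sample rate the driver actually granted, return how many
    samples to request per read call and how deep the task buffer should be."""
    samples_per_read = int(actual_rate_hz * read_period_s)
    buffer_samples = int(actual_rate_hz * buffer_s)
    return samples_per_read, buffer_samples

# LVDT / load-cell task at 1 kS/s: read 100 samples per call, not 10,
# with a 1000-sample buffer.  Each blocking read then takes ~100 ms,
# pacing the loop without a Timed Loop.
print(read_plan(1000))
# A silently coerced-up rate still yields a matched plan:
print(read_plan(1613))
```

Because the plan is derived from the *actual* rate rather than the requested one, a coerced device cannot quietly outrun its reads.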

-Santhosh
Message 9 of 15

I think some combo of things Santhosh and I have said should help get you there.

 

1. To me, the biggest red flag *BY FAR* is the choice of N Channel 1 Sample reads.  You'd be much better off reading multiple samples per read call, such as the 100 ms worth suggested by Santhosh.

 

2. I checked specs on a few of your devices and they do indeed have a variety of sample rate constraints.  So I'd urge you to do like the shipping examples and *QUERY* the actual sample rate after the call to configure DAQmx Timing.  Then use 1/10 of that as the # of samples to read per WHILE loop with no other delays in the loop.

 

3. If you put another delay in the While loop (or use a Timed Loop) you risk the timing overconstraint I mentioned before.  DAQmx Read alone can handle the loop timing.  (Again, compare to the shipping examples for continuous acquisition that let DAQmx Read pace their loops.)

 

4. Because you're dedicating a separate loop for each task, I'd agree with Santhosh that the method above is preferable to the one I described in msg #6 where you'd periodically request "all available samples".  That other method would be for cases where you serviced multiple tasks of different rates in a single loop.

 

5. I never really define tasks in MAX, so I don't know what to make of the "Samples to Read" parameter there.  When the code makes its actual execution-time call to DAQmx Read, there's an input terminal for the # of samples to read, plus a default value/behavior for when you don't wire it.  Either way, it seems to me that the code value's gonna win out and the MAX value is actually meaningless.

    Maybe the MAX value is supposed to be a buffer size suggestion?  (As to why I call it a suggestion rather than a definition, see further explanation here and then also my partly related gripe/suggestion about it).   But it's conceivable to me that the rules at the links don't apply the same way to a MAX task as they would in a call to DAQmx Timing.  In case that's true, you should make those buffers considerably bigger.  I'd want buffers of at least 1 second and I personally tend to make 5-10 second buffers more often than not.  We're still talking total memory in the single digits of MB probably.
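The claim that even generous buffers stay in the single-digit-MB range is easy to check with hardware-free arithmetic (the channel count below is a rough illustrative guess for a chassis like this, not a figure from the thread):

```python
def buffer_memory_mb(rate_hz, n_channels, buffer_s, bytes_per_sample=8):
    """Estimate task buffer memory for a given rate, channel count, and
    buffer depth, assuming float64 scaled samples (8 bytes each)."""
    return rate_hz * n_channels * buffer_s * bytes_per_sample / 1e6

# Even a 10-second buffer for, say, 96 analog-input channels at 1 kS/s
# costs well under 10 MB of host memory:
print(round(buffer_memory_mb(1000, 96, 10.0), 2))  # -> 7.68
```

So erring on the side of 5-10 second buffers is essentially free on a PXI controller with gigabytes of RAM.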

 

 

-Kevin P

Message 10 of 15