Multifunction DAQ


DAQmx read time at first iteration

I am implementing analogue waveform acquisition software using LabVIEW 2019 and a cDAQ-9189 chassis with an NI 9221 ADC module.

 

After configuring the number of channels I want to read I am using the DAQmx Read VI (Analog 1D Wfm, NChan N Samp) to acquire the waveform. See the attached VI.

Notice that the sample rate is 1000 S/s and the samples per channel are 100.

The VI is called inside a while loop each time a software event is triggered.

 

Now comes the problem. I measured the execution time of the DAQmx Read VI using the Tick Count (ms) function (not present in the uploaded version of the VI). It turns out that the first call to the DAQmx Read VI takes around 400 ms, while in the following iterations it is always around 100 ms.
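For reference, the per-call benchmark described above can be sketched generically. This is a minimal Python illustration of the Tick Count (ms) pattern, not DAQmx code: `acquire()` is a hypothetical stand-in for the DAQmx Read call and simply sleeps to simulate a blocking ~100 ms acquisition.

```python
import time

def acquire():
    # Stand-in for the DAQmx Read VI; sleeps to simulate a
    # blocking read of 100 samples at 1000 S/s (~100 ms).
    time.sleep(0.1)

durations = []
for _ in range(5):
    t0 = time.perf_counter()          # analogous to the first Tick Count (ms)
    acquire()
    durations.append(time.perf_counter() - t0)

# With a genuine first-call overhead, durations[0] would stand out
# from the rest; with this stand-in, every call takes ~100 ms.
print([round(d, 3) for d in durations])
```

The key point is that the timestamps must bracket only the call under test; any extra work between the two readings inflates the measurement.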

 

Why is that so?

 

Message 1 of 4

All due respect, but there's an awful lot of threads related to execution speed where the measurement method is significantly flawed.  I can't rule that out when you don't show the method.

 

The value of 'samples per channel' (used to configure DAQmx Timing) on the front panel is 1000, not 100.

 

The similar input to DAQmx Read is left unwired, so the default value of -1 will be used.  That value has a special meaning for a Finite Sampling task -- it means "wait until all the samples are acquired before returning them all at once".

 

100 msec makes sense if DAQmx Timing is set up with 1000 Hz sample rate and 100 samples per channel.  Unsure what to say about the 400 msec without seeing your benchmarking method.
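The arithmetic behind that 100 msec figure can be checked directly: a finite read that waits for all samples blocks for roughly samples-per-channel divided by the sample rate (driver overhead ignored). A quick Python sanity check:

```python
def expected_read_time(samps_per_chan, rate_hz):
    """Approximate blocking time, in seconds, of a finite DAQmx Read
    that waits for all requested samples (driver overhead ignored)."""
    return samps_per_chan / rate_hz

# 100 samples at 1000 S/s -> 0.1 s, matching the ~100 ms observed.
print(expected_read_time(100, 1000))

# If DAQmx Timing were actually configured for 1000 samples per
# channel, a read waiting for all of them would block a full second.
print(expected_read_time(1000, 1000))
```

This is why the mismatch between the front-panel value (1000) and the value actually passed (100) matters for interpreting the timing.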

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 4

Thank you for your answer Kevin.

I am aware that it is almost impossible to replicate my problem, but I cannot come up with a simple example to share. The actual setup I am using is a little too complex to ask people to reproduce, since it involves physical hardware (a cDAQ chassis and an ADC module).

Basically, I asked the question in case I am missing some detail about the DAQmx Read VI. For instance, the first time it is called it could perform some kind of configuration task that takes extra time.

 


All due respect, but there's an awful lot of threads related to execution speed where the measurement method is significantly flawed.  I can't rule that out when you don't show the method.


Please, find attached the VI with the measurement method I'm using. Anyway, I am seeing this delay in the first call also by looking at the waveform I am collecting with the cDAQ. So I am quite sure that the first time I call the DAQmx Read VI it actually takes more time to execute.

 


The value of 'samples per channel' (used to configure DAQmx Timing ) on the front panel is 1000, not 100.


Yep, but I'm calling the VI and passing the value 100 to it.

 

The similar input to DAQmx Read is left unwired, so the default value of -1 will be used.  That value has a special meaning for a Finite Sampling task -- it means "wait until all the samples are acquired before returning them all at once".


This is actually what I want to do.

Anyway, wiring it to the "samples per channel" control doesn't change anything in this case.

 


100 msec makes sense if DAQmx Timing is set up with 1000 Hz sample rate and 100 samples per channel.  Unsure what to say about the 400 msec without seeing your benchmarking method.


I agree that it makes sense.

Message 3 of 4

I don't see anything I can identify as a timing measurement method. A lot of subVIs are missing here, but nothing about their input & output wires suggests that any of them would be doing the measurement either.

 

A first call to the whole config & run subVI "readAllAIWaveform.vi" might take a little longer while it configures the task for the first time. Reuses of the subVI can be a little quicker because (for example) the same-sized task buffer is likely already available and doesn't need to be freshly allocated.
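That buffer-reuse effect can be illustrated generically. This Python sketch (not DAQmx itself; `read_samples` is a hypothetical stand-in) lazily allocates a buffer on the first call for a given size and reuses it on subsequent same-sized calls, which is the one-time cost pattern described above.

```python
_buffer_cache = {}

def read_samples(n):
    """Stand-in for a config & run subVI: the first call for a given
    size pays the allocation cost; later same-sized calls reuse it."""
    if n not in _buffer_cache:
        _buffer_cache[n] = [0.0] * n   # stand-in for task/buffer setup
    buf = _buffer_cache[n]
    # ... acquisition would fill buf here ...
    return buf

first = read_samples(100)
second = read_samples(100)
# Same object reused: no fresh allocation on the second call.
print(first is second)
```

The real DAQmx driver's behavior is more involved, but the shape of the cost (slow first call, quicker repeats) is the same.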

 

With desktop DAQ cards, I'd expect something in the tens of msec overhead for the configuration. I have a vague memory of a years-old thread where I was surprised to find significantly more overhead from someone's cDAQ system (which I was later able to pretty much replicate), but I can't seem to find it now. My recollection was that it approached 100 msec simply to stop and restart.

 

400 msec still sounds unusually long, and I still can't confirm anything about your measurement method.

 

 

-Kevin P

Message 4 of 4