
Theoretical Timing Question

Solved!
Go to solution

Hello everyone!

I have a question involving timing with the DAQmx Base functions. My task is to record data at a fixed sample rate (in theory it will be greater than 100,000 points a second), and I need to display the data and write it to file in real time. My idea was to set the sample clock to the desired rate (let's say 100,000 for now) and use a standard state machine loop (just a while loop with shift registers and a case statement inside) to gather data, write it to the file, and then decimate the data to display it to the user. On each iteration the amount of data read would be one tenth of the sample rate, so I thought I could just count iterations until I reached ten times the requested recording time in seconds. Unfortunately, after running this program for a desired 20 minutes at 50,000 points per second, I noticed that the iterations got consistently longer over time; the effect was so small that I did not notice it on 3 to 5 minute runs. Does this have to do with the decimation of the data slowing down the program because it has to reallocate the array every time, or does it have to do with my flawed thinking about how to time our voltage recordings?
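For what it's worth, the growing-iteration symptom is consistent with rebuilding one ever-growing array each iteration (for example, Build Array into a shift register). Since LabVIEW is graphical, here is a rough Python sketch of the arithmetic only; the function names and the 5,000-sample chunk size are made up for illustration:

```python
def elements_copied_growing(n_iters, chunk=5000):
    """Elements moved if every iteration rebuilds the whole array
    (what Build Array into a shift register effectively does)."""
    total, size = 0, 0
    for _ in range(n_iters):
        size += chunk      # array grows by one chunk...
        total += size      # ...and the entire array is copied again
    return total

def elements_copied_chunked(n_iters, chunk=5000):
    """Elements moved if each chunk is handed off (logged/queued) once."""
    return n_iters * chunk
```

At 0.1 s per chunk, a 20-minute run is 12,000 iterations; the rebuild-everything pattern moves thousands of times more data than handing each chunk off once, and the cost of each iteration keeps growing, which matches what you observed.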

 

Information

-LabVIEW 2014

-DAQmxbase package used

-MAC OS X system

-NI USB-6211

 

Needs

-Sample rate > 100,000 points a second

-Record for a user-specified number of seconds

-Write to file and display data to the user

 

Problems

-While loop iterations run longer than 0.1 s, so the program keeps recording voltage longer than it should

 

If anything is unclear or I forgot some important information, please let me know!

Message 1 of 8
Solution
Accepted by topic author Scopes

1. A State Machine is NOT the right architecture here.  You need to keep that DAQ loop running fast enough to keep up with the data coming into the DAQ.  Instead, use a Producer/Consumer.  The idea here is that parallel loops do the data logging and processing of the data, leaving the DAQ loop to do nothing but read the data as it comes in.

2. Timing in Windows is not reliable at all.  Don't trust it.

3. Use the Configure DAQmx Timing function (assuming it is available in DAQmx Base) to make DAQmx stream the data to your TDMS file.  It is one less thing for you to worry about, and it is a lot more efficient than anything you could write yourself.

4. If you take my advice in (1), then you should read a set number of points each loop iteration.  Let's just say 10 kS, so each loop iteration will take 100 ms.  For your decimation, you then just take every 10th sample.  There is a simple Decimate 1D Array function you can use to make it easier on yourself.
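Since LabVIEW code is graphical, here is a rough text-language sketch (Python) of the Producer/Consumer shape described in points 1 and 4. The chunk size is the 10 kS from above; everything else (fake samples, queue, threads) is stand-in scaffolding, not the DAQmx API:

```python
import queue
import threading

CHUNK = 10_000                      # 10 kS per read -> 100 ms per iteration at 100 kS/s
q = queue.Queue()

def producer(n_chunks):
    """Stand-in for the DAQ loop: read a fixed-size chunk, enqueue it, repeat."""
    for i in range(n_chunks):
        chunk = list(range(i * CHUNK, (i + 1) * CHUNK))  # fake samples
        q.put(chunk)
    q.put(None)                     # sentinel: acquisition finished

def consumer(results):
    """Stand-in for the logging loop: dequeue, log, and decimate for display."""
    while True:
        chunk = q.get()
        if chunk is None:
            break
        display = chunk[::10]       # every 10th sample, like Decimate 1D Array
        results.append(len(display))

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer(5)
t.join()
```

The queue plays the role of the LabVIEW queue between the two loops: the producer never waits on file I/O, and the consumer can fall behind briefly without the DAQ buffer overflowing.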


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 8

Thanks for your input. I searched for a timing function in DAQmx Base, and it only lets you set the number of samples per second. Is that what you had in mind for #3?

Message 3 of 8

@Scopes wrote:

Thanks for your input. I searched for a timing function in DAQmx Base, and it only lets you set the number of samples per second. Is that what you had in mind for #3?


Oops.  Mind running faster than my fingers.  That should be DAQmx Configure Logging.


Message 4 of 8

I do not think DAQmx Base has that capability built in. You may need to use the TDMS functions from the File palette to get the data to a TDMS file.

 

To paraphrase crossrulz on item 2: Timing in OS X is not reliable, either.  Often things work quite well for a while in either OS but then a big delay intrudes.

 

Lynn

Message 5 of 8

So if the OS X timing is unreliable, is there a way to know for sure that the producer loop will produce data at a constant interval? Because if I know I can do that, I think I will be good.

Message 6 of 8

And a side note: the TDMS file storage is used at the moment; I just had it in the same loop as the data acquisition. I think I will move that to my new consumer loop.

Message 7 of 8

The data acquisition timing is set by the clock oscillator (crystal controlled) on the DAQ device. So when you select a sample rate of 100 kHz, you get 100,000 samples per second within the accuracy of the on-board clock, which is typically +/-50 or 100 parts per million. The only remaining issue is whether your software can read the data fast enough to keep up. That is where the Producer/Consumer architecture comes in.
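To put numbers on that accuracy figure, here is a back-of-envelope sketch (plain Python arithmetic, nothing DAQmx-specific; the function name is made up for illustration):

```python
def worst_case_clock_error_s(duration_s, ppm):
    """Worst-case timing drift of the on-board clock over a recording."""
    return duration_s * ppm * 1e-6

# A 20-minute (1200 s) recording with a +/-100 ppm clock can be off by
# at most about 0.12 s -- far smaller than typical OS scheduling jitter.
error = worst_case_clock_error_s(1200, 100)
```

So counting samples against the hardware clock gives timing good to roughly 0.01%, whereas counting loop iterations inherits all of the operating system's jitter.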

 

I do not have any DAQ devices with sampling rates that high to test with.

 

Lynn

Message 8 of 8