Multifunction DAQ


Data Acquisition Using DAQ PCI 6221 and LabVIEW 8.5

Hello everyone,

 

I have a very specific question regarding data acquisition with a PCI 6221 multifunction card. In order to understand the question, I will first give a short outline of what I have done so far:

 

I am trying to perform a continuous data acquisition (analog input) task. After initializing the virtual channels, the task is started and the DAQmx Read (Analog 1D Wfm, NChan, NSamp) VI is called repeatedly in a while loop. To my understanding, this means that the A/D converter is continuously converting the data, while the PC only collects the data from the buffer whenever DAQmx Read is called. The rate at which DAQmx Read is called can be set via the "number of samples per channel" input, which I usually set to 1/10 of the total sampling rate in order to avoid overwrite errors. Thus, DAQmx Read is called at a frequency of 10 Hz.

 

The next steps are to display and save the data. I created a producer-consumer architecture using queues (producer: loop with DAQmx Read; consumer 1: put data into an array and display it; consumer 2: save data to the HDD). The array of waveforms from DAQmx Read.vi is enqueued in the producer loop after each execution, and this array is then sent to the consumers. As each waveform consists of many data points, I use Mean.vi [NI_AALBase.lvlib] to reduce the noise and the number of data points. For consumer 1, the display, the Mean function simply takes the mean of all data arriving in the queue, meaning that the data shown on the display updates at the same rate at which DAQmx Read.vi produces data. This also implies that the time separation between the data points on the display is 100 ms for the read-out rate mentioned above.

 

However, as I sometimes need a higher time resolution than 100 ms, the second consumer works differently: the data arrays arriving through the queue are divided into chunks. For example, let's say that each waveform contains 2000 data points (i.e. voltages), which were recorded at a sampling rate of 20 kHz. This means that the waveforms are separated by 100 ms, whereas the individual data points within a waveform are separated by 50 µs. If a time resolution of 1 ms is desired, consumer 2 takes the waveform and divides the Y component into 100 packages of 20 y-values, which are then averaged (also see the attached image). This of course decreases the signal-to-noise ratio, as only 20 instead of 2000 data points are used for averaging.
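In text form, the chunked averaging in consumer 2 amounts to the following (a hypothetical Python sketch of the arithmetic, not the actual LabVIEW code; the function name `chunk_average` is made up for illustration):

```python
# Hypothetical sketch of consumer 2's chunked averaging.
# One 2000-sample waveform recorded at 20 kHz spans 100 ms; chunks of
# 20 samples give 100 averaged values spaced 1 ms apart.

def chunk_average(samples, chunk_size):
    """Return the mean of each consecutive chunk of `chunk_size` samples."""
    assert len(samples) % chunk_size == 0, "waveform must divide evenly"
    return [sum(samples[i:i + chunk_size]) / chunk_size
            for i in range(0, len(samples), chunk_size)]

waveform = [float(i) for i in range(2000)]   # stand-in for one waveform's Y data
averaged = chunk_average(waveform, 20)
print(len(averaged))  # 100 values, one per millisecond
```

Each output point is averaged over only 20 raw samples instead of 2000, which is where the signal-to-noise trade-off comes from.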

 

But here comes the problem: when I need a data rate of 20 Hz for the evaluation (i.e. the rate of the data written to the HDD), I have two options in my current VI: either I decrease the number of samples for DAQmx Read.vi [e.g. sampling rate 10 kHz, number of samples 500] and tell consumer 2 to average over the whole package of 500 points arriving in the queue, OR I keep a number of samples of 1000 but tell consumer 2 to divide the package into two parts and average over each part separately. I checked both, and the signal-to-noise ratio is significantly better when I use a higher read-out rate (calling DAQmx Read more often) instead of subdividing the data packages from DAQmx Read. Note that this is at the same sampling rate, so in the end the number of data points used for averaging is the same.

Is this expected behaviour? Is there something wrong with the data averaging I described? I can of course provide a more complete example if necessary.

 

Thank you in advance and best wishes,

Phage

 

 

 

 

Message 1 of 13

I'll venture that you've got a bug in your code somewhere.  If implemented correctly, it won't matter whether you read, enqueue, dequeue, and average in 500-sample chunks or whether you read, enqueue, and dequeue in 1000-sample chunks then average the two 500-sample halves of this packet.
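A quick numeric sanity check on synthetic data (sketched in Python, not in the actual VI) shows why the two paths must agree when implemented correctly:

```python
# Demonstration that averaging 500-sample chunks directly and averaging
# the two 500-sample halves of 1000-sample packets give identical results.
import random

def chunk_means(samples, n):
    """Mean of each consecutive n-sample chunk."""
    return [sum(samples[i:i + n]) / n for i in range(0, len(samples), n)]

random.seed(0)
stream = [random.gauss(0.0, 1.0) for _ in range(4000)]  # fake noisy signal

# Path A: read and average in 500-sample chunks.
path_a = chunk_means(stream, 500)

# Path B: read 1000-sample packets, then average each 500-sample half.
path_b = []
for i in range(0, len(stream), 1000):
    path_b.extend(chunk_means(stream[i:i + 1000], 500))

print(path_a == path_b)  # True: same slices, same sums, bit-identical
```

Both paths sum exactly the same samples in the same order, so any difference in the measured signal-to-noise ratio points to a bug rather than to the math.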

 

Check your code more carefully; I think you'll find that it isn't doing what you intend in at least one of your two situations.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 13

Thank you for your reply, Kevin.

 

In principle, I also first thought that this was a bug in my code. To check whether the averaging is done correctly, I tried the following:

 

I recorded data at 40 S/s from a simulated device and saved the data to two files. In the first file, the data is averaged over 4 data points (giving 10 data points saved per second). In the second file, the data is not averaged at all, i.e. 40 data points per second. When I average file 2 afterwards over 4 data points (not in LabVIEW), the resulting data is exactly the same as the contents of file 1. Thus, the averaging seems to work exactly as intended (for comparison, see the attached Averaging.tif).

 

I furthermore compared the two averaging methods using the simulated device (Comparison.tif). There it does not seem to matter whether the data is read out in chunks of 4 data points and then averaged down to 2 data points, or directly read out as 2 data points.

 

However, doing exactly the same thing with the real measurement set-up and a simple resistor still gives me a systematic error/noise (see Measurement.tif). How exactly does DAQmx Read.vi determine whether enough samples are available in the buffer? For a multiple-channel acquisition, are the samples read out as 2000 from channel 1, then 2000 from channel 2, or as 1 from channel 1, 1 from channel 2 (repeated 2000 times)? Do you maybe have a suggestion for another "experiment" to check for errors in my code?

 

Thank you very much!

 

 

Message 3 of 13

Just a couple quick thoughts:

 

1. The data set shown in black in the "Measurement" pic has a suspicious look to it: the triangular shape of the fluctuations is suspiciously regular compared with the other data set.

 

2. As an experiment, break the problem down into parts.  First just capture a good-sized data set and store it to file.  Then pull data from that file (instead of from live DAQ) when you try your two methods for enqueueing and averaging.  See which one is doing things right, then you can fix the one that's doing something wrong.

 

 

-Kevin P

Message 4 of 13

1. You are absolutely right. The triangular shape is also what puzzled me most. The only explanation I can think of is noise with a frequency of 10 Hz, with the data always being read at the minimum or maximum. Since the read-out rate is also 10 Hz, my question remains whether there is some kind of electronic signal that might interfere with the data whenever DAQmx Read is called.

 

2. This is a good idea! I will also try to repeat what I did with the simulated device in a real measurement (i.e. use both methods simultaneously for comparison) and hopefully come to a conclusion. I am still wondering why I did not observe this behaviour with the simulated device, but maybe the large slope of the simulated data hides the "oscillations".

Message 5 of 13

I have now tried it without the live DAQ, using a measurement file I created previously (sampling rate 20 kHz, with all data saved to the HDD).

 

In the producer, a certain sub-array of the measurement file is read (analogous to DAQmx Read, which produces a certain chunk of data). This data is then averaged in the consumer loop. For comparison, I also manually divided the data in the measurement file into chunks of 1000 or 2000 data points and averaged these in a separate program (not LabVIEW). The result is that, when I use the measurement file, it does not matter whether the data is read in larger portions, enqueued, separated into chunks and then averaged, or read in smaller portions, enqueued and averaged. Thus, I assume that the architecture is in principle doing what it is supposed to do. Unfortunately, this does not help me with my initial problem.

 

 

Message 6 of 13

I just went back and read the original post more carefully.  Looking to clarify: you seem to have 1 producer loop and 2 or more consumer loops.  Does each consumer loop have its own dedicated queue with the producer loop feeding identical copies of data to each of the queues? 

 

Aside from that, I think it's time to post code.

 


@Phage wrote:

Thus, I assume that the architecture is in principle doing what it is supposed to do. Unfortunately, this does not help me with my initial problem.

I don't think that's the right lesson to take away from the experiment using fixed data from a file.  Instead, you've confirmed that the queues and averaging functions do in fact behave properly.  You've narrowed your search for the problem.  It must lie somewhere in the relatively few parts of the code that are different between your file-data experiment and your normal DAQ-data runs.  I expect this is almost entirely on the producer side, so that's very likely the code to focus on and post.

 

 

-Kevin P

Message 7 of 13

There are two consumers. Each consumer has its own dedicated queue, and both queues are fed with identical copies of the data. The main difference is that the queue for the consumer that stores data to the HDD is only active upon the user's request.

 

You are right, this experiment just showed that the averaging works properly; this is what I meant by "architecture". I attached the VI with some unnecessary code deleted. However, I only deleted the code of the first consumer, which is used for displaying the data. If I should delete more to make the code easier to read, please let me know. You may ask why I did not use subVIs to increase the reusability of the code: in the beginning, I tried to write a VI that runs as fast as possible (as the computers we use for data acquisition are sometimes slow). Having read about the overhead caused by subVIs, I tried to avoid them, which I now regret.

 

Thank you very much for taking your time!

 

 

Message 8 of 13

It's not too late to create subVIs.  Draw a rectangle around the code you'd like to package up as a subVI, then click the menu item "Edit->Create SubVI".

 

I guessed wrong -- I see no reason for concern in the producer loop.  (Except you may want to let the other Enqueue functions cause loop termination on error).

 

Nothing jumps out in the consumer loop immediately, though I didn't try to trace every wire detail related to calcs & array subsets for averaging.  You could do yourself a favor by using the built-in function "Decimate (single shot).vi" and set averaging = True.  That'd eliminate the loop and array subset stuff that's potentially a tricky area and where a bug may still be lurking.
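For reference, the averaging decimation Kevin suggests boils down to something like the following (a hypothetical Python sketch of the behaviour, not the VI itself; exact edge-case handling of LabVIEW's Decimate (single shot).vi may differ):

```python
# Sketch of decimation with averaging: keep one output value per
# `factor` input samples, namely their mean (any trailing partial
# chunk is dropped in this sketch).

def decimate_averaging(samples, factor):
    """Reduce `samples` by `factor`, replacing each group by its mean."""
    n_out = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n_out)]

print(decimate_averaging([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 2))  # [1.5, 3.5, 5.5]
```

Using one built-in call for this instead of a hand-rolled loop with array subsets removes exactly the code region where an off-by-one indexing bug could hide.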

 

I'd look very closely at whatever's different between this code and your experiment where you pulled data from file instead of DAQ.   Only the producer loop *needed* to change, but since there's nothing there to explain the odd behavior you've observed, I'm suspicious that the different results are due to a subtle difference in the consumers (or the inputs to the consumer loop, used to chop the data into packets for averaging).

 

 

-Kevin P

Message 9 of 13

That way of creating a subVI is fantastic! I should have contacted you much earlier! I was not aware of this function and I will give it a try. As you pointed out, this would eliminate some of the calculations and enhance the readability of the code.

 

I attached the version of the VI where I read the data from a file and averaged it.

 

 

Message 10 of 13